Commit Graph

82 Commits

Dayuan Jiang
85cb441e26 feat: multi-provider model configuration with UI/UX improvements (#355)
* feat: add multi-provider model configuration

- Add model config dialog for managing multiple AI providers
- Support for OpenAI, Anthropic, Google, Azure, Bedrock, OpenRouter, DeepSeek, SiliconFlow, Ollama, and AI Gateway
- Add model selector dropdown in chat panel header
- Add API key validation endpoint
- Add custom model ID input with keyboard navigation
- Fix hover highlight in Command component
- Add suggested models for each provider including latest Claude 4.5 series
- Store configuration locally in browser

* feat: improve model config UI and move selector to chat input

- Move model selector from header to chat input (left of send button)
- Add per-model validation status (queued, running, valid, invalid)
- Filter model selector to only show verified models
- Add editable model IDs in config dialog
- Add custom model input field alongside suggested models dropdown
- Fix hover states on provider buttons and select triggers
- Update OpenAI suggested models with GPT-5 series
- Add alert-dialog component for delete confirmation

* refactor: revert shadcn component changes, apply hover fix at usage site

* feat: add AWS credentials support for Bedrock provider

- Add AWS Access Key ID, Secret Access Key, Region fields for Bedrock
- Show different credential fields based on provider type
- Update validation API to handle Bedrock with AWS credentials
- Add region selector with common AWS regions

* fix: reset Test button after validation completes

* fix: reset validation button to Test after success

* fix: complete bedrock support and UI/UX improvements

- Add bedrock to ALLOWED_CLIENT_PROVIDERS for client credentials
- Pass AWS credentials through full chain (headers → API → provider)
- Replace non-existent GPT-5 models with real ones (o1, o3-mini)
- Add accessibility: aria-labels, focus-visible rings, inline errors
- Add more AWS regions (Ohio, London, Paris, Mumbai, Seoul, São Paulo)
- Fix setTimeout cleanup with useRef on component unmount
- Fix TypeScript type consistency in getSelectedAIConfig fallback

* chore: remove unused code

- Remove unused setAccessCodeRequired state in chat-panel.tsx
- Remove unused getSelectedModel export in model-config.ts

* fix: UI/UX improvements for model configuration dialog

- Add gradient header styling with icon badge
- Change Configuration section icon from Key to Settings2
- Add duplicate model detection with warning banner and inline removal
- Filter out already-added models from suggestions dropdown
- Add type-to-confirm for deleting providers with 3+ models
- Enhance delete confirmation dialog with warning icon
- Improve model selector discoverability (show model name + chevron)
- Add truncation for long model names with title tooltip
- Remove AI provider settings from Settings dialog (now in Model Config)
- Extract ValidationButton into reusable component

* fix: prevent duplicate model IDs within same provider

- Block adding model if ID already exists in provider
- Block editing model ID to match existing model in provider

* fix: improve duplicate model ID notifications

- Add toast notification when trying to add duplicate model
- Allow free typing when editing model ID, validate on blur
- Show warning toast instead of blocking input

* fix: improve duplicate model validation UX in config dialog

- Add inline error display for duplicate model IDs
- Show red border on input when error exists
- Validate on blur with shake animation for edit errors
- Prevent saving empty model names
- Clear errors when user starts typing
- Simplify error styling (small red text, no heavy chips)
2025-12-22 22:36:36 +09:00
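
The duplicate-model rules in #355 reduce to one membership check shared by the add and edit paths. A minimal TypeScript sketch, with a hypothetical `ProviderConfig` shape and `duplicateModelError` helper (names assumed, not from the repo):

```ts
interface ModelEntry {
    id: string
}

interface ProviderConfig {
    provider: string
    models: ModelEntry[]
}

// Returns an inline error message when candidateId already exists in this
// provider, or null when the ID is safe to add or save. editingIndex lets
// the edit path skip the entry currently being renamed.
function duplicateModelError(
    config: ProviderConfig,
    candidateId: string,
    editingIndex?: number,
): string | null {
    const normalized = candidateId.trim()
    if (normalized === "") return "Model name cannot be empty"
    const clash = config.models.some(
        (m, i) => i !== editingIndex && m.id.trim() === normalized,
    )
    return clash
        ? `Model "${normalized}" already exists for ${config.provider}`
        : null
}
```

Running this on blur and clearing the result on the next keystroke matches the validate-on-blur, clear-on-typing behavior the commit describes.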
Dayuan Jiang
f087b54ee4 feat: add get_shape_library tool for AI icon discovery (#335)
* feat: add get_shape_library tool for AI icon discovery

- Add server-side tool that returns shape library documentation
- AI can fetch icon/shape names on-demand before generating diagrams
- Includes path traversal protection and input sanitization
- Library index embedded in tool description for discoverability
- Supports 33 libraries: AWS, Azure, GCP, Kubernetes, Cisco, etc.

* fix: improve get_shape_library error handling and imports

- Move fs/path imports to top of file (avoid dynamic imports per call)
- Distinguish file-not-found vs other errors in catch block
- Include invalid input in validation error message
- Log unexpected errors for debugging

* docs: add get_shape_library to system prompt tool list

- Add Tool4 (get_shape_library) to available tools section
- Add usage guidance in 'Choose the right tool' section
- Update AWS icons note to reference get_shape_library for icon discovery

* fix: display get_shape_library tool output in chat UI

* fix: correct state check for get_shape_library output display

* fix: make get_shape_library output respect fold state

* style: auto-format with Biome

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-12-20 23:19:49 +09:00
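
A sketch of the guarded lookup #335 describes, assuming the library docs are markdown files keyed by library name (directory layout and file extension are assumptions):

```ts
import { promises as fs } from "node:fs"
import path from "node:path"

// Hypothetical location of the shape-library docs; the real path may differ.
const LIBRARY_DIR = path.join(process.cwd(), "lib", "shape-libraries")

// Allow only simple identifiers like "aws" or "kubernetes", which both
// sanitizes input and blocks path traversal ("../../etc/passwd").
const LIBRARY_NAME = /^[a-z0-9_-]+$/i

export async function getShapeLibrary(name: string): Promise<string> {
    if (!LIBRARY_NAME.test(name)) {
        // Include the invalid input in the message, per the follow-up fix
        throw new Error(`Invalid library name: ${JSON.stringify(name)}`)
    }
    const file = path.join(LIBRARY_DIR, `${name}.md`)
    // Defense in depth: the resolved path must stay inside LIBRARY_DIR
    if (!path.resolve(file).startsWith(path.resolve(LIBRARY_DIR) + path.sep)) {
        throw new Error("Path traversal detected")
    }
    try {
        return await fs.readFile(file, "utf8")
    } catch (err) {
        // Distinguish file-not-found from unexpected errors, as the commit notes
        if ((err as NodeJS.ErrnoException).code === "ENOENT") {
            throw new Error(`Unknown shape library: ${name}`)
        }
        console.error("get_shape_library failed:", err)
        throw err
    }
}
```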
Dayuan Jiang
cd76fa615e fix: edit_diagram streaming and JSON repair improvements (#271)
- Add shared editDiagramOriginalXmlRef between streaming preview and tool handler
  to avoid conflicts when applying operations (fixes "cell already exists" errors)
- Add JSON repair preprocessing to fix LLM-generated malformed JSON like `:=`
- Filter out tool calls with invalid/undefined inputs from interrupted streaming
- Remove perf console logs
2025-12-15 21:28:31 +09:00
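
The `:=` repair in #271 can be done as a targeted preprocessing pass before `JSON.parse`. A naive sketch (the real repair may cover more malformations):

```ts
// Repairs the specific malformation from the commit: a model emitting
// `"key":= "value"` instead of `"key": "value"` in tool-call arguments.
function repairToolCallJson(raw: string): unknown {
    try {
        return JSON.parse(raw)
    } catch {
        // Replace `:=` after a key's closing quote with a plain colon and
        // retry; this is deliberately narrow and could touch string values
        // containing the same sequence, acceptable for a repair heuristic.
        const repaired = raw.replace(/"\s*:=\s*/g, '": ')
        return JSON.parse(repaired)
    }
}
```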
Dayuan Jiang
f175276872 refactor: replace text-based edit_diagram with ID-based operations (#267)
* refactor: replace text-based edit_diagram with ID-based operations

- Add applyDiagramOperations() function using DOMParser for ID lookup
- New schema: operations array with type (update/add/delete), cell_id, new_xml
- Update chat-panel.tsx handler for new operations format
- Update OperationsDisplay component to show operation type and cell_id
- Simplify system prompts with new ID-based examples
- Add ID validation for add operations
- Add warning for edges referencing deleted cells

* fix: add ID validation to update operation and remove dead code

- Add ID mismatch validation to update operation (consistency with add)
- Remove orphaned replaceXMLParts function (~300 lines of dead code)
- Update cell_id schema description for clarity
- Add unit tests for applyDiagramOperations (11 tests)
2025-12-15 14:22:56 +09:00
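
#267's operations schema turns edits into DOM lookups instead of string matching. A browser-side sketch of `applyDiagramOperations()` under that schema (error messages and details are illustrative):

```ts
type DiagramOperation =
    | { type: "update"; cell_id: string; new_xml: string }
    | { type: "add"; cell_id: string; new_xml: string }
    | { type: "delete"; cell_id: string }

// Runs in the browser: DOMParser/XMLSerializer/CSS.escape are Web APIs.
function applyDiagramOperations(xml: string, ops: DiagramOperation[]): string {
    const doc = new DOMParser().parseFromString(xml, "text/xml")
    const root = doc.querySelector("root")
    if (!root) throw new Error("No <root> element in diagram XML")

    const byId = (id: string) =>
        doc.querySelector(`mxCell[id="${CSS.escape(id)}"]`)

    for (const op of ops) {
        const existing = byId(op.cell_id)
        if (op.type === "delete") {
            existing?.remove()
            continue
        }
        const parsed = new DOMParser()
            .parseFromString(op.new_xml, "text/xml").documentElement
        // ID validation from the commit: new_xml must carry the same id
        if (parsed.getAttribute("id") !== op.cell_id) {
            throw new Error(`ID mismatch in ${op.type} for ${op.cell_id}`)
        }
        const imported = doc.importNode(parsed, true)
        if (op.type === "add") {
            if (existing) throw new Error(`Cell ${op.cell_id} already exists`)
            root.appendChild(imported)
        } else if (existing) {
            existing.replaceWith(imported)
        } else {
            throw new Error(`Cell ${op.cell_id} not found for update`)
        }
    }
    return new XMLSerializer().serializeToString(doc)
}
```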
Dayuan Jiang
f743219c03 feat: add minimal style mode toggle for faster diagram generation (#260)
* feat: add minimal style mode toggle for faster diagram generation

- Add Minimal/Styled toggle switch in chat input UI
- When enabled, removes color/style instructions from system prompt
- Faster generation with plain black/white diagrams
- Improve XML auto-fix: handle foreign tags, extra closing tags, trailing garbage
- Fix isMxCellXmlComplete to strip Anthropic function-calling wrappers
- Add debug logging for truncation detection diagnosis

* fix: prevent false XML parse errors during streaming

- Escape unescaped & characters in convertToLegalXml() before DOMParser validation
- Only log console.error for final output, not during streaming updates
- Prevents Next.js dev mode error overlay from showing for expected streaming states
2025-12-14 19:38:40 +09:00
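
The ampersand fix is a single regex pass: escape any `&` that does not already begin an entity, then hand the string to DOMParser. A sketch (the entity list is assumed to match what `convertToLegalXml()` accepts):

```ts
// Escapes ampersands that do not start a recognized XML entity, so labels
// like "Research & Development" no longer trip DOMParser mid-stream.
function escapeBareAmpersands(xml: string): string {
    return xml.replace(
        /&(?!(?:amp|lt|gt|quot|apos|#\d+|#x[0-9a-fA-F]+);)/g,
        "&amp;",
    )
}

// escapeBareAmpersands('<mxCell value="R & D" />')
//   -> '<mxCell value="R &amp; D" />'
```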
Dayuan Jiang
0851b32b67 refactor: simplify LLM XML format to output bare mxCells only (#254)
* refactor: simplify LLM XML format to output bare mxCells only

- Update wrapWithMxFile() to always add root cells (id=0, id=1) automatically
- LLM now generates only mxCell elements starting from id=2 (no wrapper tags)
- Update system prompts and tool descriptions with new format instructions
- Update cached responses to remove root cells and wrapper tags
- Update truncation detection to check for complete mxCell endings
- Update documentation in xml_guide.md

* fix: address PR review issues for XML format refactor

- Fix critical bug: inconsistent truncation check using old </root> pattern
- Fix stale error message referencing </root> tag
- Add isMxCellXmlComplete() helper for consistent truncation detection
- Improve regex patterns to handle any attribute order in root cells
- Update wrapWithMxFile JSDoc to document root cell removal behavior

* fix: handle non-self-closing root cells in wrapWithMxFile regex
2025-12-14 14:04:44 +09:00
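
Under the #254 format, the model's output is a flat run of `<mxCell>` elements and everything else is boilerplate the app owns. A simplified sketch of `wrapWithMxFile()` (the real regexes tolerate any attribute order and non-self-closing root cells, per the follow-up fixes):

```ts
// Wraps bare mxCell markup in the mxfile scaffolding and always supplies
// the two root cells (id="0", id="1") itself.
function wrapWithMxFile(bareCells: string): string {
    // Strip root cells if a model emitted them anyway (simplified pattern)
    const cells = bareCells
        .replace(/<mxCell[^>]*\bid="0"[^>]*\/>/g, "")
        .replace(/<mxCell[^>]*\bid="1"[^>]*\/>/g, "")
        .trim()
    return [
        "<mxfile><diagram><mxGraphModel><root>",
        '<mxCell id="0" /><mxCell id="1" parent="0" />',
        cells,
        "</root></mxGraphModel></diagram></mxfile>",
    ].join("\n")
}
```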
Dayuan Jiang
66bd0e5493 feat: add append_diagram tool and improve truncation handling (#252)
* feat: add append_diagram tool for truncation continuation

When LLM output hits maxOutputTokens mid-generation, instead of
failing with an error loop, the system now:

1. Detects truncation (missing </root> in XML)
2. Stores partial XML and tells LLM to use new append_diagram tool
3. LLM continues generating from where it stopped
4. Fragments are accumulated until XML is complete
5. Server limits to 5 steps via stepCountIs(5)

Key changes:
- Add append_diagram tool definition in route.ts
- Add append_diagram handler in chat-panel.tsx
- Track continuation mode separately from error mode
- Continuation mode has unlimited retries (not counted against limit)
- Error mode still limited to MAX_AUTO_RETRY_COUNT (1)
- Update system prompts to document append_diagram tool

* fix: show friendly message and yellow badge for truncated output

- Add yellow 'Truncated' badge in UI instead of red 'Error' when XML is incomplete
- Show a friendly message for 'toolUse.input is invalid' errors
- Built on top of append_diagram continuation feature

* refactor: remove debug logs and simplify truncation state

- Remove all debug console.log statements
- Remove isContinuationModeRef, derive from partialXmlRef.current.length > 0

* docs: fix append_diagram instructions for consistency

- Change 'Do NOT include' to 'Do NOT start with' (clearer intent)
- Add <mxCell id="0"> to prohibited start patterns
- Change 'closing tags </root></mxGraphModel>' to just '</root>' (wrapWithMxFile handles the rest)
2025-12-14 12:34:34 +09:00
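
A sketch of the detection-plus-continuation loop from #252: treat output as truncated unless it ends on a closed mxCell, and accumulate fragments across append_diagram calls (the ref shape and messages here are illustrative):

```ts
// Truncation heuristic: complete output ends with a closed mxCell element.
function isMxCellXmlComplete(xml: string): boolean {
    const trimmed = xml.trim()
    return trimmed.endsWith("/>") || trimmed.endsWith("</mxCell>")
}

// Continuation mode is derived from the stashed partial XML, matching the
// refactor that removed the separate isContinuationModeRef.
function onDiagramChunk(
    xml: string,
    partialXmlRef: { current: string },
): { xml: string } | { error: string } {
    if (!isMxCellXmlComplete(xml)) {
        partialXmlRef.current += xml // stash the fragment for the next step
        return { error: "Output truncated - continue with append_diagram" }
    }
    const full = partialXmlRef.current + xml
    partialXmlRef.current = "" // leaving continuation mode
    return { xml: full }
}
```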
Dayuan Jiang
987dc9f026 fix: add configurable MAX_OUTPUT_TOKENS to prevent truncation (#251)
- Add MAX_OUTPUT_TOKENS env var (fixes output truncation with Bedrock)
- Remove redundant fixToolCallInputs function
- Remove jsonrepair dependency
- Consolidate duplicate lastMessage/userInputText variables
2025-12-13 23:28:41 +09:00
Dayuan Jiang
e321ba7959 chore: optimize Vercel costs by removing analytics and configuring functions (#238)
- Create vercel.json with optimized function settings:
  - Chat API: 512MB memory, 120s timeout
  - Other APIs: 256MB memory, 10s timeout
- Remove @vercel/analytics package and imports
- Reduce chat route maxDuration from 300s to 120s

Expected savings: $2-4/month, keeping costs under $20 included credit
2025-12-12 16:13:06 +09:00
Dayuan Jiang
97cc0a07dc fix: disable history XML replacement by default (#217)
Some models (e.g. minimax) copy placeholder text instead of generating
fresh XML, causing tool call validation failures and infinite loops.

Added ENABLE_HISTORY_XML_REPLACE env var (default: false) to control
this behavior.
2025-12-11 17:36:18 +09:00
Biki Kalita
a047a6ff97 feat: Display AI reasoning/thinking blocks in chat interface (#152)
* feat: Add reasoning/thinking blocks display in chat interface

* feat: add multi-provider options support and replace custom reasoning UI with AI Elements

* resolve conflicting reasoning configs and correct provider-specific reasoning parameters

* try to solve conflict

* fix: simplify reasoning display and remove unnecessary dependencies

- Remove Streamdown dependency (~5MB) - reasoning is plain text only
- Fix Bedrock providerOptions merging for Claude reasoning configs
- Remove unsupported DeepSeek reasoning configuration
- Clean up unused environment variables (REASONING_BUDGET_TOKENS, REASONING_EFFORT, DEEPSEEK_REASONING_*)
- Remove dead commented code from route.ts

Reasoning blocks contain plain thinking text and don't need markdown/diagram/code rendering.

* feat: comprehensive reasoning support improvements

Major improvements:
- Auto-enable reasoning display for all supported models
- Fix provider-specific reasoning configurations
- Remove unnecessary Streamdown dependency (~5MB)
- Clean up debug logging

Provider changes:
- OpenAI: Auto-enable reasoningSummary for o1/o3/gpt-5 models
- Google: Auto-enable includeThoughts for Gemini 2.5/3 models
- Bedrock: Restrict reasoningConfig to only Claude/Nova (fixes MiniMax error)
- Ollama: Add thinking support for qwen3-like models

Other improvements:
- Remove ENABLE_REASONING toggle (always enabled)
- Fix Bedrock providerOptions merging for Claude
- Simplify reasoning component (plain text rendering)
- Clean up unused environment variables

* fix: critical bugs and documentation gaps in reasoning support

Critical fixes:
- Fix Bedrock shallow merge bug (deep merge preserves anthropicBeta + reasoningConfig)
- Add parseInt validation with parseIntSafe helper (prevents NaN errors)
- Validate all numeric env vars with min/max ranges

Documentation improvements:
- Add BEDROCK_REASONING_BUDGET_TOKENS and BEDROCK_REASONING_EFFORT to env.example
- Add OLLAMA_ENABLE_THINKING to env.example
- Update JSDoc with accurate env var list and ranges

Code cleanup:
- Remove debug console.log statements from route.ts
- Refactor duplicate providerOptions assignments

---------

Co-authored-by: Dayuan Jiang <34411969+DayuanJiang@users.noreply.github.com>
Co-authored-by: Dayuan Jiang <jdy.toh@gmail.com>
2025-12-11 00:24:43 +09:00
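
The `parseIntSafe` helper named in the last commit of #152 is easy to reconstruct; the signature and the example env var defaults below are assumptions:

```ts
// Parses a numeric env var, rejects NaN, and clamps to a sane range.
function parseIntSafe(
    value: string | undefined,
    fallback: number,
    min: number,
    max: number,
): number {
    const parsed = Number.parseInt(value ?? "", 10)
    if (Number.isNaN(parsed)) return fallback
    return Math.min(max, Math.max(min, parsed))
}

// Hypothetical usage: bound the Bedrock reasoning budget to a plausible range
const budgetTokens = parseIntSafe(
    process.env.BEDROCK_REASONING_BUDGET_TOKENS,
    2048,
    1024,
    64000,
)
```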
Dayuan Jiang
d2ba133eaf feat: add PDF and text file upload support (#205)
- Add client-side PDF text extraction using unpdf library
- Support text files (.txt, .md, .json, .csv, .py, .js, .ts, etc.)
- Add file preview with character count for PDF/text files
- Add 150k character limit for extracted content
- Highlight Paper to Diagram example with NEW badge
- Fix React hydration error by adding explicit IDs to ResizablePanelGroup
- Remove code duplication by centralizing file utilities in pdf-utils.ts
2025-12-10 21:32:35 +09:00
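
A client-side sketch of the extraction path in #205, assuming unpdf's `getDocumentProxy`/`extractText` API and the 150k character cap from the commit:

```ts
import { extractText, getDocumentProxy } from "unpdf"

const MAX_EXTRACTED_CHARS = 150_000 // limit stated in the commit

// Reads the uploaded File in the browser, extracts its text with unpdf,
// and truncates to the character budget before attaching it to the chat.
async function extractPdfText(file: File): Promise<string> {
    const buffer = new Uint8Array(await file.arrayBuffer())
    const pdf = await getDocumentProxy(buffer)
    const { text } = await extractText(pdf, { mergePages: true })
    return text.length > MAX_EXTRACTED_CHARS
        ? text.slice(0, MAX_EXTRACTED_CHARS)
        : text
}
```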
Dayuan Jiang
43e5993f47 fix: improve LLM diagram context awareness and image preview (#202)
- Add replaceHistoricalToolInputs to replace XML in tool calls with placeholders
- Send both previousXml and current xml so LLM can understand user's manual edits
- Update system message to mark current XML as authoritative source of truth
- Fix React StrictMode issue with blob URL cleanup in FilePreviewList
- Add unoptimized prop to Image components for blob URLs
2025-12-10 18:04:37 +09:00
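
A sketch of `replaceHistoricalToolInputs` from #202; the message-part shape and placeholder string are assumptions, but the idea is mapping stale tool inputs onto a marker while previousXml and the current xml travel separately:

```ts
// Hypothetical part shape; the real app uses AI SDK UI message parts.
interface ToolPart {
    type: string
    input?: { xml?: string }
}

const XML_PLACEHOLDER = "[XML omitted - current diagram state is authoritative]"

// Older display_diagram/edit_diagram inputs are bulky and stale; replace
// their XML with a placeholder so the model reads state from the current
// xml (and previousXml) instead of copying from history.
function replaceHistoricalToolInputs(parts: ToolPart[]): ToolPart[] {
    return parts.map((part) =>
        part.input?.xml
            ? { ...part, input: { ...part.input, xml: XML_PLACEHOLDER } }
            : part,
    )
}
```

The #217 follow-up gates exactly this behavior behind ENABLE_HISTORY_XML_REPLACE, since some models copied the placeholder text verbatim.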
Dayuan Jiang
97ab82e027 feat: add bring-your-own-API-key support (#186)
- Add AI provider settings to config panel (provider, model, API key, base URL)
- Support 7 providers: OpenAI, Anthropic, Google, Azure, OpenRouter, DeepSeek, SiliconFlow
- Client API keys stored in localStorage, never stored on server
- Client settings override server env vars when provided
- Skip server credential validation when client provides API key
- Bypass usage limits (request/token/TPM) when using own API key
- Add /api/config endpoint for fetching usage limits
- Add privacy notices to settings dialog, about pages, and quota toast
- Add clear settings button to reset saved API keys
- Update README files (EN/CN/JA) with BYOK documentation

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-09 17:50:07 +09:00
Dayuan Jiang
967d63c57e feat: support minimax model (#185)
* feat: support minimax model with XML wrapping fix

- Add wrapWithMxFile utility to properly wrap XML for draw.io
- Fix 'Not a diagram file' error when model generates raw <root> XML
- Add supportsPromptCaching check for conditional caching
- Only enable Bedrock prompt caching for Claude models

* docs: update model mention to minimax-m2 across About pages and READMEs

- Update tooltip in chat-panel.tsx to mention minimax-m2 model change
- Update English, Chinese, and Japanese About pages with model change info
- Update English, Chinese, and Japanese READMEs with demo site model note

---------

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-09 15:53:59 +09:00
singledog957
95c5a75ca3 feat: Show detailed error messages instead of generic 'Internal server error' (#144) (#154)
* feat: Show detailed error messages instead of generic 'Internal server error' (#144)

* refactor: simplify error handling logic per feedback

* refactor: imported AI SDK error handler

* fix: remove unused import and expand sensitive data filter

- Remove unused NoSuchModelError import
- Add 'secret', 'password', 'credential' to sensitive data filter

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-08 20:52:18 +09:00
Dayuan Jiang
622829b903 feat: add daily token limit with actual usage tracking (#171)
* feat: add daily token limit with actual usage tracking

- Add DAILY_TOKEN_LIMIT env var for configurable daily token limit
- Track actual tokens from Bedrock API response metadata (not estimates)
- Server sends inputTokens + cachedInputTokens + outputTokens via messageMetadata
- Client increments token count in onFinish callback with actual usage
- Add NaN guards to prevent corrupted localStorage values
- Add token limit toast notification with quota display
- Remove client-side token estimation (was blocking legitimate requests)
- Switch to js-tiktoken for client compatibility (pure JS, no WASM)

* feat: add TPM (tokens per minute) rate limiting

- Add 50k tokens/min client-side rate limit
- Track tokens per minute with automatic minute rollover
- Check TPM limit after daily limits pass
- Show toast when rate limit reached
- NaN guards for localStorage values

* feat: make TPM limit configurable via TPM_LIMIT env var

* chore: restore cache debug logs

* fix: prevent race condition in TPM tracking

checkTPMLimit was resetting TPM count to 0 when checking, which
overwrote the count saved by incrementTPMCount. Now checkTPMLimit
only reads and incrementTPMCount handles all writes.

* chore: improve TPM limit error message clarity
2025-12-08 18:56:34 +09:00
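
The race fix in #171 is a read/write split over one localStorage record. A sketch with the storage key and record shape assumed:

```ts
const TPM_KEY = "tpm-usage" // storage key assumed for illustration

interface TpmRecord {
    minute: number // epoch minute the count belongs to
    tokens: number
}

const currentMinute = () => Math.floor(Date.now() / 60_000)

function readTpm(): TpmRecord {
    try {
        const parsed = JSON.parse(
            localStorage.getItem(TPM_KEY) ?? "",
        ) as TpmRecord
        // NaN guard: corrupted values fall through to an empty window
        if (Number.isFinite(parsed.tokens) && Number.isFinite(parsed.minute)) {
            return parsed
        }
    } catch {
        // corrupted or missing record; start fresh
    }
    return { minute: currentMinute(), tokens: 0 }
}

// Read-only check - the race fix was exactly that checking never writes
function checkTPMLimit(limit: number): boolean {
    const rec = readTpm()
    return rec.minute !== currentMinute() || rec.tokens < limit
}

// All writes happen here, once actual usage is known from the response
function incrementTPMCount(tokens: number): void {
    const rec = readTpm()
    const minute = currentMinute()
    const base = rec.minute === minute ? rec.tokens : 0 // minute rollover
    localStorage.setItem(
        TPM_KEY,
        JSON.stringify({ minute, tokens: base + tokens }),
    )
}
```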
Dayuan Jiang
ecea8a6005 fix: use static maxDuration value for Next.js 16 compatibility (#160) 2025-12-08 10:56:37 +09:00
Dayuan Jiang
ee9267d54c chore: make maxDuration configurable via env variable (#157)
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-08 10:20:52 +09:00
Dayuan Jiang
8c431ee6ed fix: preserve message parts order in chat display (#151)
- Fix bug where text after tool calls was merged with initial text
- Group consecutive text/file parts into bubbles while keeping tools in order
- Parts now display as: plan -> tool_result -> additional text
- Remove debug logs from fixToolCallInputs function

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 19:56:31 +09:00
Dayuan Jiang
86420a42c6 fix: implement client-side caching for example diagrams (#150)
- Add client-side cache check in onFormSubmit to bypass API calls for example prompts
- Use findCachedResponse to match input against cached examples
- Directly set messages with cached tool response when example matches
- Hide regenerate button for cached example responses (toolCallId starts with 'cached-')
- Prevents unnecessary API calls when using example buttons

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 19:36:09 +09:00
Dayuan Jiang
0baf21fadb fix: validate XML before displaying diagram to catch duplicate IDs (#147)
- Add validation to loadDiagram in diagram-context, returns error or null
- display_diagram and edit_diagram tools now check validation result
- Return error to AI agent with state: output-error so it can retry
- Skip validation for trusted sources (localStorage, history, internal templates)
- Add debug logging for tool call inputs to diagnose Bedrock API issues
2025-12-07 14:38:15 +09:00
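
A sketch of the validation contract from #147: return an error string the agent can retry on (state: output-error), or null when the XML is safe to display:

```ts
// Pre-display validation; runs in the browser via DOMParser.
function validateDiagramXml(xml: string): string | null {
    const doc = new DOMParser().parseFromString(xml, "text/xml")
    if (doc.querySelector("parsererror")) {
        return "XML is not well-formed"
    }
    const seen = new Set<string>()
    for (const cell of Array.from(doc.querySelectorAll("mxCell"))) {
        const id = cell.getAttribute("id")
        if (id === null) continue
        if (seen.has(id)) return `Duplicate mxCell id: ${id}`
        seen.add(id)
    }
    return null // trusted sources (localStorage, history, templates) skip this
}
```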
Biki Kalita
8b578a456e fix: Remove hardcoded temperature parameter to support models that don't accept it (#133)
* Fix: remove hardcoded temperature parameter to support reasoning models

* feat: make temperature configurable via AI_TEMPERATURE env var

- Instead of removing temperature entirely, make it optional via env var
- Set AI_TEMPERATURE=0 for deterministic output (recommended for diagrams)
- Leave unset for models that don't support temperature (e.g., GPT-5.1 reasoning)

* docs: add AI_TEMPERATURE env var documentation

- Update env.example with AI_TEMPERATURE option
- Update README.md configuration section
- Add Temperature Setting section in ai-providers.md

* docs: add TEMPERATURE env var documentation

- Update env.example with TEMPERATURE option
- Update README.md, README_CN.md, README_JA.md configuration sections
- Add Temperature Setting section in ai-providers.md
- Update route.ts to use TEMPERATURE env var

---------

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 01:34:59 +09:00
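
The end state of #133 is that temperature is opt-in. A sketch of the env handling (the helper name is hypothetical; the route reads TEMPERATURE per the last commit):

```ts
// TEMPERATURE is opt-in; leaving it unset means the parameter is never
// sent, which matters for reasoning models that reject it outright.
function temperatureFromEnv(): { temperature: number } | Record<string, never> {
    const raw = process.env.TEMPERATURE
    if (raw === undefined || raw === "") return {}
    const value = Number.parseFloat(raw)
    return Number.isFinite(value) ? { temperature: value } : {}
}

// Usage inside the chat route (model/messages come from the route itself):
// const result = streamText({ model, messages, ...temperatureFromEnv() })
```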
Dayuan Jiang
3894abd9ed feat: add tool call JSON repair and Bedrock compatibility (#127)
- Add fixToolCallInputs() to fix Bedrock API requirement (JSON object, not string)
- Add experimental_repairToolCall for malformed JSON from model
- Add stepCountIs(5) limit to prevent infinite loops
- Update edit_diagram tool description with JSON escaping warning

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 00:40:13 +09:00
Dayuan Jiang
cbb92bd636 fix: set maxDuration to 60 for Vercel hobby plan (#122) 2025-12-06 18:09:30 +09:00
Dayuan Jiang
8d898d8adc fix: revert maxDuration to static value (Next.js requirement) (#121) 2025-12-06 18:04:23 +09:00
Dayuan Jiang
1e0b1ed970 feat: make maxDuration configurable via MAX_DURATION env (#120) 2025-12-06 17:47:50 +09:00
Dayuan Jiang
e893bd60f9 fix: resolve biome lint errors and memory leak in file preview (#118)
- Disable noisy biome rules (noExplicitAny, useExhaustiveDependencies, etc.)
- Fix memory leak in file-preview-list.tsx with useRef pattern
- Separate unmount cleanup into dedicated useEffect
- Add ToolPartLike interface for type safety in chat-message-display
- Add accessibility attributes (role, tabIndex, onKeyDown)
- Replace autoFocus with useEffect focus pattern
- Minor syntax improvements (optional chaining, key fixes)
2025-12-06 16:18:26 +09:00
Dayuan Jiang
150eb1ff63 chore: add Biome for formatting and linting (#116)
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
2025-12-06 12:46:40 +09:00
Dayuan Jiang
215a101f54 fix: revert edit_diagram tool description to original (#115) 2025-12-06 12:41:01 +09:00
Dayuan Jiang
e00938d9d3 feat: enhance system prompt with app context and dynamic model name (#114)
- Add App Context section describing the left/right panel layout
- Add App Features section with icon locations (history, theme, upload, export, clear)
- Dynamically inject model name into system prompt via {{MODEL_NAME}} placeholder
- Expand edit_diagram tool description with usage guidelines
2025-12-06 12:37:37 +09:00
Twelveeee
3fb349fb3e fix: clear button can't clear error message & feat: add settings dialog and access code (#77)
* fix: clear button can't clear error message

* feat: add settings dialog and access code

* fix: address review feedback - dark mode, types, formatting

* feat: only show Settings button when access code is required

* refactor: rename ACCESS_CODES to ACCESS_CODE_LIST

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-05 22:09:34 +09:00
Dayuan Jiang
ed29e32ba3 feat: restore Langfuse observability integration (#103)
- Add lib/langfuse.ts with client, trace input/output, telemetry config
- Add instrumentation.ts for OpenTelemetry setup with Langfuse span processor
- Add /api/log-save endpoint for logging diagram saves
- Add /api/log-feedback endpoint for thumbs up/down feedback
- Update chat route with sessionId tracking and telemetry
- Add feedback buttons (thumbs up/down) to chat messages
- Add sessionId tracking throughout the app
- Update env.example with Langfuse configuration
- Add @langfuse/client, @langfuse/otel, @langfuse/tracing, @opentelemetry/sdk-trace-node
2025-12-05 21:15:02 +09:00
Dayuan Jiang
4cd78dc561 chore: remove complex 503 error handling code (#102)
- Remove 15s streaming timeout detection (too slow, added complexity)
- Remove status indicator (issue resolved by switching model)
- Remove streamingError state and related refs
- Simplify onFinish callback (remove 503 detection logging)
- Remove errorHandler function (use default AI SDK errors)

The real fix was switching from global.* to us.* Bedrock model.
This removes ~134 lines of unnecessary complexity.
2025-12-05 20:18:19 +09:00
Dayuan Jiang
e0c5d966e3 feat: add image upload validation with 2MB limit and max 5 files (#101)
- Add 2MB file size limit with client and server-side validation
- Add max 5 files limit per upload
- Add sonner toast library for better error notifications
- Create ErrorToast component with keyboard accessibility
- Batch multiple validation errors into single toast
- Validate file size in all upload methods (input, paste, drag-drop)
- Add server-side validation in /api/chat endpoint
2025-12-05 19:30:50 +09:00
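
A sketch of the shared validation from #101, batching every problem into one list so the UI can raise a single toast:

```ts
const MAX_FILE_SIZE = 2 * 1024 * 1024 // 2MB limit from the commit
const MAX_FILES = 5

// Used by all upload paths (input, paste, drag-drop); the server-side
// check in /api/chat mirrors the same rules.
function validateUpload(files: File[]): string[] {
    const errors: string[] = []
    if (files.length > MAX_FILES) {
        errors.push(`Too many files: ${files.length} (max ${MAX_FILES})`)
    }
    for (const file of files) {
        if (file.size > MAX_FILE_SIZE) {
            errors.push(`${file.name} exceeds the 2MB limit`)
        }
    }
    return errors // batch into one toast instead of one per file
}
```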
Dayuan Jiang
95160f5a21 fix: handle Bedrock 503 streaming errors with timeout detection (#92)
- Add 15s streaming timeout to detect mid-stream stalls (e.g., Bedrock 503)
- Add stop() call to allow user retry after timeout
- Add streamingError state for timeout-detected errors
- Improve server-side error logging for empty usage detection
- Add user-friendly error messages for ServiceUnavailable and Throttling errors
2025-12-05 14:23:47 +09:00
dayuan.jiang
ff6f130f8a refactor: remove Langfuse observability integration
- Delete lib/langfuse.ts, instrumentation.ts
- Remove API routes: log-save, log-feedback
- Remove feedback buttons (thumbs up/down) from chat
- Remove sessionId tracking throughout codebase
- Remove @langfuse/*, @opentelemetry dependencies
- Clean up env.example
2025-12-05 01:30:02 +09:00
dayuan.jiang
46cbc3354c fix: add manual token usage reporting to Langfuse for Bedrock streaming
Bedrock streaming responses don't auto-report token usage to OpenTelemetry.
This fix manually sets span attributes (ai.usage.promptTokens, gen_ai.usage.input_tokens)
from the AI SDK onFinish callback to ensure Langfuse captures token counts.
2025-12-05 00:26:02 +09:00
Dayuan Jiang
3534cb13f7 refactor: extract system prompts and add extended prompt for Opus/Haiku 4.5 (#71)
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
2025-12-04 13:26:06 +09:00
Dayuan Jiang
9d9613a8d1 feat: add trace-level input/output to Langfuse observability (#69)
* feat: add trace-level input/output to Langfuse observability

- Add @langfuse/client and @langfuse/tracing dependencies
- Wrap POST handler with observe() for proper tracing
- Use updateActiveTrace() to set trace input, output, sessionId, userId
- Filter Next.js HTTP spans in shouldExportSpan so AI SDK spans become root traces
- Enable recordInputs/recordOutputs in experimental_telemetry

* refactor: extract Langfuse logic to separate lib/langfuse.ts module
2025-12-04 11:24:26 +09:00
Dayuan Jiang
fa1b02ad78 feat: integrate Langfuse for LLM observability (#66)
* feat: integrate Langfuse for LLM observability

- Add instrumentation.ts with Langfuse OpenTelemetry exporter
- Enable experimental telemetry on streamText calls
- Add instrumentationHook to Next.js config
- Install required dependencies (@vercel/otel, langfuse-vercel, etc.)

* feat: add optional Langfuse observability integration

- Add session tracking with unique sessionId per conversation
- Add user tracking via IP address (x-forwarded-for header)
- Make telemetry conditional - only enabled if LANGFUSE_PUBLIC_KEY is set
- Add environment variable validation in instrumentation.ts
- Add sessionId validation (type check + 200 char limit)
- Update env.example with Langfuse configuration docs
- Remove unused langfuse-vercel and @vercel/otel packages

* fix: remove deprecated instrumentationHook (enabled by default in Next.js 15)
2025-12-04 00:23:09 +09:00
Dayuan Jiang
595f24857a fix: show user-friendly error when model doesn't support images (#55)
When models like DeepSeek (deepseek-chat, deepseek-reasoner) receive image
inputs, they return a cryptic error about 'unknown variant image_url'.
This change detects such errors and shows a clear message asking users
to remove the image or switch to a vision-capable model.

Fixes #42
2025-12-03 19:49:58 +09:00
Dayuan Jiang
a8e627f1f8 feat: add XML structure guide to system prompt for smaller models (#51)
- Add essential draw.io XML structure rules to system prompt
- Include critical rules about mxCell nesting (all must be direct children of root)
- Add shape/vertex and connector/edge examples with proper structure
- Improve tool description for display_diagram with validation rules
- Update xml_guide.md with better swimlane examples showing flat structure
- Add client-side XML validation to catch nested mxCell errors early

Helps address issues #40 (local Ollama models not working) and #39 (mxCell nesting errors)
2025-12-03 16:14:53 +09:00
Dayuan Jiang
5b31216917 feat: cache example prompt responses to save tokens (#34)
- Add lib/cached-responses.ts with pre-generated XML for 4 example prompts
- Modify chat API route to check cache before calling AI
- Cache returns instant response (~0.26s) vs AI generation (~20-25s)
- Add "(cached for instant response)" text to example panel
- Cache only activates for first message with empty diagram
2025-12-01 14:07:50 +09:00
Dayuan Jiang
c7d0260328 feat: add Bedrock prompt caching for system and conversation messages (#32)
* feat: add Bedrock prompt caching for system and conversation messages

- Add cache point to system message (2558+ tokens cached)
- Add cache point to last assistant message in conversation history
- This caches the entire conversation prefix for subsequent requests
- Reduces latency and costs for multi-turn conversations

* refactor: remove duplicated system prompt
2025-12-01 10:43:33 +09:00
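
A sketch of the cache-point placement from #32, assuming the AI SDK Bedrock provider's `cachePoint` provider option and a v5-style `ModelMessage` array (the installed SDK version may differ):

```ts
import type { ModelMessage } from "ai"

const CACHE_POINT = {
    bedrock: { cachePoint: { type: "default" as const } },
}

// Marks the system message and the last assistant message as cache points,
// so the entire conversation prefix is reused on the next request.
function addCachePoints(messages: ModelMessage[]): ModelMessage[] {
    const lastAssistant = messages
        .map((m) => m.role)
        .lastIndexOf("assistant")
    return messages.map((m, i) =>
        m.role === "system" || i === lastAssistant
            ? { ...m, providerOptions: CACHE_POINT }
            : m,
    )
}
```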
Dayuan Jiang
d2d4dd01cc fix: filter out messages with empty content arrays for Bedrock API (#31)
* fix: filter out messages with empty content arrays for Bedrock API

The convertToModelMessages function from AI SDK can produce messages with
empty content arrays when assistant messages have only tool call parts or
when tool results aren't properly converted. Bedrock API rejects these with
400 errors. This fix filters out invalid messages before sending to the API.

* fix: add diagnostic logging for empty message content

Added logging to capture the original UI message structure when empty content
is detected after conversion. This helps debug the root cause while the
filter provides a safety net for Bedrock API compatibility.
2025-12-01 01:15:43 +09:00
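
The safety net in #31 is a one-pass filter after `convertToModelMessages`; a sketch:

```ts
// Bedrock rejects messages whose content array is empty with a 400, so
// drop them before the request while the root cause is diagnosed.
function filterEmptyMessages<T extends { content: unknown }>(
    messages: T[],
): T[] {
    return messages.filter((m) => {
        if (typeof m.content === "string") return m.content.length > 0
        return Array.isArray(m.content) && m.content.length > 0
    })
}
```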
Dayuan Jiang
b4679f6598 fix: increase maxDuration to 300s for Fluid Compute (#30) 2025-12-01 00:46:40 +09:00
Dayuan Jiang
0d0d553e23 fix: correct anthropic beta header config for fine-grained tool streaming (#27)
* fix: correct anthropic beta header config for fine-grained tool streaming

- Use bedrock.anthropicBeta for Bedrock provider (not additionalModelRequestFields)
- Use top-level headers for direct Anthropic API
- Update @ai-sdk/amazon-bedrock to 3.0.62
- Add headers support to ModelConfig interface

* fix: update @ai-sdk/amazon-bedrock to 3.0.62 for tool streaming support
2025-11-30 16:34:42 +09:00
dayuan.jiang
7a6a7eaf7c docs: Update examples with new prompts and demo images
- Add Examples section to README with 2-column grid layout
- Include demo images for GCP, AWS, Azure, animated connectors, and cat
- Update example panel buttons with clearer labels
- Add animated connector example button
- Add instruction for animated connectors in chat route
2025-11-17 15:12:16 +09:00
dayuan.jiang
4a3abc2e39 add multiple providers 2025-11-15 13:36:42 +09:00