* feat: add support for custom AI Gateway base URL
- Add createGateway support with configurable baseURL
- Allow AI_GATEWAY_BASE_URL environment variable for:
* Local development with custom Gateway
* Self-hosted AI Gateway deployments
* Enterprise proxy configurations
- Maintain backward compatibility: defaults to Vercel Gateway when not set
- Update documentation with usage examples and configuration notes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
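A minimal sketch of the wiring described in the commit above, assuming the `createGateway` factory from `@ai-sdk/gateway` (option names may differ slightly by version):

```ts
import { createGateway } from "@ai-sdk/gateway"

// Default to the hosted Vercel AI Gateway; a custom AI_GATEWAY_BASE_URL
// points local, self-hosted, or proxied deployments at their own endpoint.
const gateway = createGateway({
    apiKey: process.env.AI_GATEWAY_API_KEY,
    ...(process.env.AI_GATEWAY_BASE_URL
        ? { baseURL: process.env.AI_GATEWAY_BASE_URL }
        : {}),
})
```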
* fix: remove errant character in error message
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
* feat: add Vercel AI Gateway support
- Updated environment configuration to include AI_GATEWAY_API_KEY for unified access to multiple AI providers.
- Added gateway provider to the list of supported AI providers in the codebase.
- Enhanced documentation to explain the usage of Vercel AI Gateway and its model format.
This change simplifies authentication and allows users to switch between providers seamlessly.
* Update package @ai-sdk/gateway to latest version 2.0.21
* refactor: replace text-based edit_diagram with ID-based operations
- Add applyDiagramOperations() function using DOMParser for ID lookup
- New schema: operations array with type (update/add/delete), cell_id, new_xml
- Update chat-panel.tsx handler for new operations format
- Update OperationsDisplay component to show operation type and cell_id
- Simplify system prompts with new ID-based examples
- Add ID validation for add operations
- Add warning for edges referencing deleted cells
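A sketch of what the ID-based application could look like, assuming browser DOM APIs and the operation shape listed above (helper names are illustrative):

```ts
type DiagramOperation = {
    type: "update" | "add" | "delete"
    cell_id: string
    new_xml?: string
}

// Apply ID-based operations against the current diagram XML using
// real DOM lookups instead of fragile text matching.
function applyDiagramOperations(xml: string, ops: DiagramOperation[]): string {
    const doc = new DOMParser().parseFromString(xml, "text/xml")
    const root = doc.querySelector("root")
    if (!root) throw new Error("No <root> element found")

    // Parse a cell fragment and enforce that its id matches cell_id.
    const parseCell = (cellXml: string, expectedId: string): Element => {
        const el = new DOMParser().parseFromString(cellXml, "text/xml").documentElement
        if (el.getAttribute("id") !== expectedId)
            throw new Error(`new_xml id does not match cell_id "${expectedId}"`)
        return doc.importNode(el, true)
    }

    for (const op of ops) {
        const cell = doc.querySelector(`mxCell[id="${op.cell_id}"]`)
        if (op.type === "delete") cell?.remove()
        else if (op.type === "update" && cell && op.new_xml)
            cell.replaceWith(parseCell(op.new_xml, op.cell_id))
        else if (op.type === "add" && op.new_xml)
            root.appendChild(parseCell(op.new_xml, op.cell_id))
    }
    return new XMLSerializer().serializeToString(doc)
}
```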
* fix: add ID validation to update operation and remove dead code
- Add ID mismatch validation to update operation (consistency with add)
- Remove orphaned replaceXMLParts function (~300 lines of dead code)
- Update cell_id schema description for clarity
- Add unit tests for applyDiagramOperations (11 tests)
* feat: add minimal style mode toggle for faster diagram generation
- Add Minimal/Styled toggle switch in chat input UI
- When enabled, removes color/style instructions from system prompt
- Faster generation with plain black/white diagrams
- Improve XML auto-fix: handle foreign tags, extra closing tags, trailing garbage
- Fix isMxCellXmlComplete to strip Anthropic function-calling wrappers
- Add debug logging for truncation detection diagnosis
* fix: prevent false XML parse errors during streaming
- Escape unescaped & characters in convertToLegalXml() before DOMParser validation
- Only log console.error for final output, not during streaming updates
- Prevents Next.js dev mode error overlay from showing for expected streaming states
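The escaping step could be as small as this sketch (the entity list is an assumption):

```ts
// Escape bare "&" so DOMParser does not report entity errors on
// partially streamed XML; leave existing entities untouched.
function escapeBareAmpersands(xml: string): string {
    return xml.replace(
        /&(?!(?:amp|lt|gt|quot|apos|#\d+|#x[0-9a-fA-F]+);)/g,
        "&amp;",
    )
}
```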
* refactor: simplify LLM XML format to output bare mxCells only
- Update wrapWithMxFile() to always add root cells (id=0, id=1) automatically
- LLM now generates only mxCell elements starting from id=2 (no wrapper tags)
- Update system prompts and tool descriptions with new format instructions
- Update cached responses to remove root cells and wrapper tags
- Update truncation detection to check for complete mxCell endings
- Update documentation in xml_guide.md
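A simplified sketch of the wrapper behavior (the real function also strips root cells the LLM emits anyway, per the follow-up fix below):

```ts
// Wrap bare mxCell elements (ids >= 2, as the LLM now emits them) in a
// full draw.io document, adding the two mandatory root cells itself.
function wrapWithMxFile(cells: string): string {
    return [
        "<mxfile><diagram><mxGraphModel><root>",
        '<mxCell id="0" />',
        '<mxCell id="1" parent="0" />',
        cells,
        "</root></mxGraphModel></diagram></mxfile>",
    ].join("\n")
}
```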
* fix: address PR review issues for XML format refactor
- Fix critical bug: inconsistent truncation check using old </root> pattern
- Fix stale error message referencing </root> tag
- Add isMxCellXmlComplete() helper for consistent truncation detection
- Improve regex patterns to handle any attribute order in root cells
- Update wrapWithMxFile JSDoc to document root cell removal behavior
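The helper is presumably a small predicate along these lines (the exact checks are an assumption):

```ts
// Bare-mxCell output is complete only when the last element closes
// cleanly; anything else is treated as truncation.
function isMxCellXmlComplete(xml: string): boolean {
    const trimmed = xml.trim()
    return trimmed.endsWith("/>") || trimmed.endsWith("</mxCell>")
}
```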
* fix: handle non-self-closing root cells in wrapWithMxFile regex
* feat: add append_diagram tool for truncation continuation
When the LLM output hits maxOutputTokens mid-generation, instead of
failing into an error loop, the system now:
1. Detects truncation (missing </root> in XML)
2. Stores partial XML and tells LLM to use new append_diagram tool
3. LLM continues generating from where it stopped
4. Fragments are accumulated until XML is complete
5. Server limits to 5 steps via stepCountIs(5)
Key changes:
- Add append_diagram tool definition in route.ts
- Add append_diagram handler in chat-panel.tsx
- Track continuation mode separately from error mode
- Continuation mode has unlimited retries (not counted against limit)
- Error mode still limited to MAX_AUTO_RETRY_COUNT (1)
- Update system prompts to document append_diagram tool
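A client-side sketch of the accumulation loop, reusing the helpers sketched above (`loadDiagram` is a hypothetical render call):

```ts
declare function loadDiagram(xml: string): void // hypothetical render call

// Fragments from append_diagram are concatenated until the XML is
// complete; a non-empty buffer means we are in continuation mode.
const partialXmlRef = { current: "" }

function handleAppendDiagram(fragment: string) {
    partialXmlRef.current += fragment
    if (isMxCellXmlComplete(partialXmlRef.current)) {
        loadDiagram(wrapWithMxFile(partialXmlRef.current))
        partialXmlRef.current = "" // leave continuation mode
    }
    // Otherwise stay in continuation mode; these retries are not
    // counted against MAX_AUTO_RETRY_COUNT.
}
```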
* fix: show friendly message and yellow badge for truncated output
- Add yellow 'Truncated' badge in UI instead of red 'Error' when XML is incomplete
- Show a friendly error message for 'toolUse.input is invalid' errors
- Built on top of append_diagram continuation feature
* refactor: remove debug logs and simplify truncation state
- Remove all debug console.log statements
- Remove isContinuationModeRef, derive from partialXmlRef.current.length > 0
* docs: fix append_diagram instructions for consistency
- Change 'Do NOT include' to 'Do NOT start with' (clearer intent)
- Add <mxCell id="0"> to prohibited start patterns
- Change 'closing tags </root></mxGraphModel>' to just '</root>' (wrapWithMxFile handles the rest)
* fix: improve Azure provider auto-detection and validation (#223)
- Fix detectProvider() to only detect Azure when it has a complete config
  (AZURE_API_KEY plus either AZURE_RESOURCE_NAME or AZURE_BASE_URL); see the sketch below
- Add validation in validateProviderCredentials() for Azure to provide
clear error messages when configuration is incomplete
- Update docs/ai-providers.md to clarify Azure requires resource name
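A sketch of the completeness check (env var names from the commit, function name illustrative):

```ts
// Azure is only auto-detected when the config is actually usable:
// an API key plus either a resource name or an explicit base URL.
function hasCompleteAzureConfig(env = process.env): boolean {
    return Boolean(
        env.AZURE_API_KEY && (env.AZURE_RESOURCE_NAME || env.AZURE_BASE_URL),
    )
}
```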
* docs: add Azure reasoning options to documentation
Add NEXT_PUBLIC_MAX_EXTRACTED_CHARS environment variable to allow
configuring the maximum characters extracted from PDF and text files.
Defaults to 150000 (150k chars) if not set.
Previously, AZURE_RESOURCE_NAME was documented in env.example but not
actually used in the code, so Azure OpenAI configuration failed when
users set AZURE_RESOURCE_NAME instead of AZURE_BASE_URL.
Changes:
- Read AZURE_RESOURCE_NAME from environment and pass to createAzure()
- resourceName constructs endpoint: https://{name}.openai.azure.com/openai/v1
- baseURL takes precedence over resourceName when both are set
- Updated env.example with clearer documentation
Fixes #208
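A sketch of the precedence logic, assuming `createAzure` from `@ai-sdk/azure`:

```ts
import { createAzure } from "@ai-sdk/azure"

// AZURE_BASE_URL wins when both are set; otherwise resourceName lets
// the SDK construct https://{name}.openai.azure.com/... itself.
const azure = createAzure({
    apiKey: process.env.AZURE_API_KEY,
    ...(process.env.AZURE_BASE_URL
        ? { baseURL: process.env.AZURE_BASE_URL }
        : { resourceName: process.env.AZURE_RESOURCE_NAME }),
})
```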
- Add client-side PDF text extraction using unpdf library
- Support text files (.txt, .md, .json, .csv, .py, .js, .ts, etc.)
- Add file preview with character count for PDF/text files
- Add 150k character limit for extracted content
- Highlight Paper to Diagram example with NEW badge
- Fix React hydration error by adding explicit IDs to ResizablePanelGroup
- Remove code duplication by centralizing file utilities in pdf-utils.ts
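A sketch of the client-side extraction path, assuming unpdf's `getDocumentProxy`/`extractText` API:

```ts
import { extractText, getDocumentProxy } from "unpdf"

const MAX_EXTRACTED_CHARS = 150_000

// Extract PDF text entirely in the browser and cap the result so
// huge papers cannot blow up the prompt.
async function extractPdfText(file: File): Promise<string> {
    const pdf = await getDocumentProxy(new Uint8Array(await file.arrayBuffer()))
    const { text } = await extractText(pdf, { mergePages: true })
    return text.slice(0, MAX_EXTRACTED_CHARS)
}
```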
- Add AI provider settings to config panel (provider, model, API key, base URL)
- Support 7 providers: OpenAI, Anthropic, Google, Azure, OpenRouter, DeepSeek, SiliconFlow
- Client API keys stored in localStorage, never stored on server
- Client settings override server env vars when provided
- Skip server credential validation when client provides API key
- Bypass usage limits (request/token/TPM) when using own API key
- Add /api/config endpoint for fetching usage limits
- Add privacy notices to settings dialog, about pages, and quota toast
- Add clear settings button to reset saved API keys
- Update README files (EN/CN/JA) with BYOK documentation
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
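A sketch of the client-side storage described above, with an illustrative key name and shape:

```ts
type ClientAISettings = {
    provider: string
    model: string
    apiKey: string
    baseURL?: string
}

const SETTINGS_KEY = "ai-provider-settings" // illustrative

// Keys never leave the browser except as request fields to our own
// API route, where they override server env vars and skip usage limits.
function loadClientSettings(): ClientAISettings | null {
    const raw = localStorage.getItem(SETTINGS_KEY)
    return raw ? (JSON.parse(raw) as ClientAISettings) : null
}

function clearClientSettings() {
    localStorage.removeItem(SETTINGS_KEY)
}
```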
* feat: support minimax model with XML wrapping fix
- Add wrapWithMxFile utility to properly wrap XML for draw.io
- Fix 'Not a diagram file' error when model generates raw <root> XML
- Add supportsPromptCaching check for conditional caching
- Only enable Bedrock prompt caching for Claude models
* docs: update model mention to minimax-m2 across About pages and READMEs
- Update tooltip in chat-panel.tsx to mention minimax-m2 model change
- Update English, Chinese, and Japanese About pages with model change info
- Update English, Chinese, and Japanese READMEs with demo site model note
---------
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
* feat: add daily token limit with actual usage tracking
- Add DAILY_TOKEN_LIMIT env var for configurable daily token limit
- Track actual tokens from Bedrock API response metadata (not estimates)
- Server sends inputTokens + cachedInputTokens + outputTokens via messageMetadata
- Client increments token count in onFinish callback with actual usage
- Add NaN guards to prevent corrupted localStorage values
- Add token limit toast notification with quota display
- Remove client-side token estimation (was blocking legitimate requests)
- Switch to js-tiktoken for client compatibility (pure JS, no WASM)
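The client-side counting might look like this sketch (storage key illustrative):

```ts
// Increment the daily total with the actual usage the server reports
// via messageMetadata, guarding against corrupted localStorage values.
function incrementDailyTokens(usedTokens: number) {
    const stored = Number.parseInt(localStorage.getItem("dailyTokens") ?? "0", 10)
    const current = Number.isNaN(stored) ? 0 : stored
    localStorage.setItem("dailyTokens", String(current + usedTokens))
}

// Called from the chat onFinish callback with
// inputTokens + cachedInputTokens + outputTokens.
```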
* feat: add TPM (tokens per minute) rate limiting
- Add 50k tokens/min client-side rate limit
- Track tokens per minute with automatic minute rollover
- Check TPM limit after daily limits pass
- Show toast when rate limit reached
- NaN guards for localStorage values
* feat: make TPM limit configurable via TPM_LIMIT env var
* chore: restore cache debug logs
* fix: prevent race condition in TPM tracking
checkTPMLimit was resetting the TPM count to 0 when checking, which
overwrote the count saved by incrementTPMCount. Now checkTPMLimit
only reads, and incrementTPMCount handles all writes (see the sketch below).
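A sketch of the read/write split (shapes and key names illustrative):

```ts
const TPM_KEY = "tpm" // illustrative

// Read-only check: counts from a previous minute no longer apply.
function checkTPMLimit(limit: number): boolean {
    const minute = Math.floor(Date.now() / 60_000)
    const saved = JSON.parse(localStorage.getItem(TPM_KEY) ?? "{}")
    return saved.minute === minute && saved.count >= limit
}

// The only writer: rolls the window over when the minute changes.
function incrementTPMCount(tokens: number) {
    const minute = Math.floor(Date.now() / 60_000)
    const saved = JSON.parse(localStorage.getItem(TPM_KEY) ?? "{}")
    const count =
        saved.minute === minute && !Number.isNaN(Number(saved.count))
            ? Number(saved.count)
            : 0
    localStorage.setItem(TPM_KEY, JSON.stringify({ minute, count: count + tokens }))
}
```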
* chore: improve TPM limit error message clarity
- Save and restore chat messages, XML snapshots, session ID, and diagram XML to localStorage
- Restore diagram when DrawIO becomes ready (using new onLoad callback)
- Change close protection default to false since auto-save handles persistence
- Clear localStorage when clearing chat
- Handle edge cases: undefined edit fields, empty chartXML, missing access code header
- Add validation for orphaned mxPoint elements in validateMxCellStructure()
- Add cleanup of orphaned mxPoint elements in convertToLegalXml()
- Orphaned mxPoints cause 'Could not add object mxPoint' errors in draw.io
- mxPoint elements must have an 'as' attribute or be inside <Array as="points"> (see the cleanup sketch below)
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
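A sketch of the cleanup rule (DOM-based, function name illustrative):

```ts
// Drop mxPoint elements that neither carry an "as" attribute nor sit
// inside <Array as="points">, i.e. exactly the ones draw.io rejects.
function removeOrphanedMxPoints(doc: Document) {
    for (const point of Array.from(doc.querySelectorAll("mxPoint"))) {
        const inPointsArray =
            point.parentElement?.tagName === "Array" &&
            point.parentElement.getAttribute("as") === "points"
        if (!point.hasAttribute("as") && !inPointsArray) point.remove()
    }
}
```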
- Add 6th strategy: match by value attribute (label text)
- Add 7th strategy: normalized whitespace match
- Remove lastProcessedIndex tracking - always search from the beginning,
  since pairs may not be in document order and sequential tracking was unreliable
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
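The whitespace-normalized strategy might be as simple as this sketch:

```ts
// Compare with all whitespace runs collapsed, so formatting-only
// differences between the LLM's search text and the document no
// longer cause match failures.
const normalize = (s: string) => s.replace(/\s+/g, " ").trim()

function matchesNormalized(haystack: string, needle: string): boolean {
    return normalize(haystack).includes(normalize(needle))
}
```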
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
- Add App Context section describing the left/right panel layout
- Add App Features section with icon locations (history, theme, upload, export, clear)
- Dynamically inject model name into system prompt via {{MODEL_NAME}} placeholder
- Expand edit_diagram tool description with usage guidelines
When images are included in chat messages, the AI SDK telemetry with
recordInputs: true sends base64 image data to Langfuse. Langfuse then
attempts to upload these images to media storage, causing 1m31s timeouts.
Setting recordInputs: false prevents this while still capturing user
text input via setTraceInput().
Bedrock streaming responses don't auto-report token usage to OpenTelemetry.
This fix manually sets span attributes (ai.usage.promptTokens, gen_ai.usage.input_tokens)
from the AI SDK onFinish callback to ensure Langfuse captures token counts.
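A sketch combining both fixes (the attribute names come from the commit; the rest is illustrative):

```ts
import { trace } from "@opentelemetry/api"

// Keep image-bearing inputs out of recorded telemetry.
const telemetry = { isEnabled: true, recordInputs: false }

// Bedrock streaming does not auto-report usage, so set the span
// attributes Langfuse reads manually from onFinish.
function reportUsage(usage: { inputTokens?: number; outputTokens?: number }) {
    const span = trace.getActiveSpan()
    span?.setAttribute("ai.usage.promptTokens", usage.inputTokens ?? 0)
    span?.setAttribute("gen_ai.usage.input_tokens", usage.inputTokens ?? 0)
    span?.setAttribute("ai.usage.completionTokens", usage.outputTokens ?? 0)
    span?.setAttribute("gen_ai.usage.output_tokens", usage.outputTokens ?? 0)
}
```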
- Add Zod schema validation for log-feedback and log-save endpoints
- Create singleton LangfuseClient to avoid per-request instantiation
- Simplify log-save to only flag trace (no XML content sent)
- Use generic error messages to prevent info leakage
- Update log-feedback API to find existing chat trace by sessionId and attach score to it
- Update log-save API to create span on existing chat trace instead of standalone trace
- Add thumbs up/down feedback buttons on assistant messages
- Add message regeneration and edit functionality
- Add save dialog with format selection (drawio, png, svg)
- Pass sessionId through components for Langfuse linking
- Remove default bedrock provider requirement
- Auto-detect provider when only one API key is configured
- Show helpful error when no keys or multiple keys without AI_PROVIDER
- Fixes #73
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
* feat: add trace-level input/output to Langfuse observability
- Add @langfuse/client and @langfuse/tracing dependencies
- Wrap POST handler with observe() for proper tracing
- Use updateActiveTrace() to set trace input, output, sessionId, userId
- Filter Next.js HTTP spans in shouldExportSpan so AI SDK spans become root traces
- Enable recordInputs/recordOutputs in experimental_telemetry
* refactor: extract Langfuse logic to separate lib/langfuse.ts module
- Add essential draw.io XML structure rules to system prompt
- Include critical rules about mxCell nesting (all must be direct children of root)
- Add shape/vertex and connector/edge examples with proper structure
- Improve tool description for display_diagram with validation rules
- Update xml_guide.md with better swimlane examples showing flat structure
- Add client-side XML validation to catch nested mxCell errors early
Helps address issues #40 (local Ollama models not working) and #39 (mxCell nesting errors)
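The nesting check could look like this sketch (function name illustrative):

```ts
// Every mxCell must be a direct child of <root>; parent/child links
// are expressed via the parent attribute, never by nesting elements.
function findNestedMxCellErrors(xml: string): string[] {
    const doc = new DOMParser().parseFromString(xml, "text/xml")
    const errors: string[] = []
    for (const cell of Array.from(doc.querySelectorAll("mxCell"))) {
        if (cell.parentElement?.tagName !== "root")
            errors.push(`mxCell "${cell.getAttribute("id")}" is not a direct child of <root>`)
    }
    return errors
}
```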
- Install @ai-sdk/deepseek package
- Add DeepSeek provider support to lib/ai-providers.ts
- Add DeepSeek configuration to env.example
- Update README.md with DeepSeek in provider list
- Support both default and custom base URL for DeepSeek
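A sketch of the provider wiring, assuming `createDeepSeek` from `@ai-sdk/deepseek`:

```ts
import { createDeepSeek } from "@ai-sdk/deepseek"

// Use the package default endpoint unless DEEPSEEK_BASE_URL is set.
const deepseek = createDeepSeek({
    apiKey: process.env.DEEPSEEK_API_KEY,
    ...(process.env.DEEPSEEK_BASE_URL
        ? { baseURL: process.env.DEEPSEEK_BASE_URL }
        : {}),
})
```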
* fix: add custom anthropic baseURL
* feat: add baseURL support for all AI providers
- Add GOOGLE_BASE_URL for Google Generative AI
- Add AZURE_BASE_URL for Azure OpenAI
- Add OLLAMA_BASE_URL support (was documented but not implemented)
- Add OPENROUTER_BASE_URL for OpenRouter
- Fix missing semicolon in Anthropic case
- Update env.example with new environment variables
Closes #20
---------
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
- Add lib/cached-responses.ts with pre-generated XML for 4 example prompts
- Modify chat API route to check cache before calling AI
- Cache returns instant response (~0.26s) vs AI generation (~20-25s)
- Add "(cached for instant response)" text to example panel
- Cache only activates for first message with empty diagram
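The route-side check might be as small as this sketch (names illustrative):

```ts
// Pre-generated XML keyed by example prompt text.
const CACHED_RESPONSES: Record<string, string> = {
    // "example prompt" -> pre-generated diagram XML
}

// Only the first message of a session with an empty diagram qualifies,
// so edits and follow-ups always reach the real model.
function getCachedResponse(
    prompt: string,
    isFirstMessage: boolean,
    diagramXml: string,
): string | null {
    if (!isFirstMessage || diagramXml.trim() !== "") return null
    return CACHED_RESPONSES[prompt.trim()] ?? null
}
```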
* fix: correct anthropic beta header config for fine-grained tool streaming
- Use bedrock.anthropicBeta for Bedrock provider (not additionalModelRequestFields)
- Use top-level headers for direct Anthropic API
- Update @ai-sdk/amazon-bedrock to 3.0.62
- Add headers support to ModelConfig interface
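A sketch of the provider-specific wiring (the beta flag string is an assumption, not confirmed by the commit):

```ts
const streamingBeta = "fine-grained-tool-streaming-2025-05-14" // assumed flag

// Bedrock takes the beta via providerOptions; the direct Anthropic API
// takes it as a top-level request header instead.
const modelOptions =
    process.env.AI_PROVIDER === "bedrock"
        ? { providerOptions: { bedrock: { anthropicBeta: [streamingBeta] } } }
        : { headers: { "anthropic-beta": streamingBeta } }
```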
* fix: update @ai-sdk/amazon-bedrock to 3.0.62 for tool streaming support