Fixed an issue where files from previous examples would persist when clicking on "Animated Diagram" or "Creative Drawing" examples that don't require image uploads.
When images are included in chat messages, the AI SDK telemetry with
recordInputs: true sends base64 image data to Langfuse. Langfuse then
attempts to upload these images to media storage, causing 1m31s timeouts.
Setting recordInputs: false prevents this while still capturing user
text input via setTraceInput().
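A minimal sketch of the change, assuming a streamText call with experimental_telemetry and the project's setTraceInput helper in lib/langfuse.ts (model, messages, and userText are placeholders):

```ts
import { streamText } from "ai";
import { setTraceInput } from "@/lib/langfuse"; // project helper; path assumed

// Record only the user's text on the trace ourselves instead of letting the
// AI SDK serialize the full prompt, which may contain base64 image data.
setTraceInput(userText);

const result = streamText({
  model,
  messages,
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // keeps base64 images out of the spans sent to Langfuse
    recordOutputs: true,
  },
});
```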
The React state update (setChartXML) is async, so chartXMLRef wasn't updated
when the edit_diagram tool callback checked it. Now we update the ref directly
in onFormSubmit, handleRegenerate, and handleEditMessage before sending.
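Roughly the pattern of the fix, with the surrounding handler details assumed:

```ts
// Before: only setChartXML(xml) ran, so chartXMLRef.current still held the
// stale value by the time the edit_diagram tool callback read it.
const onFormSubmit = (xml: string, text: string) => {
  setChartXML(xml);          // async React state update
  chartXMLRef.current = xml; // synchronous; tool callbacks see it immediately
  sendMessage(text);         // placeholder for the actual send
};
```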
DrawIO iframe export was unreliable on Vercel due to network latency,
causing edit_diagram tool to hang. Now uses chartXML from context directly,
falling back to export only when no cached XML exists.
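A sketch of the fallback order (helper names other than chartXML are assumed):

```ts
// Prefer the XML already held in React context; only fall back to asking the
// DrawIO iframe for an export when nothing is cached, since that round trip
// is slow and unreliable on Vercel.
async function getCurrentDiagramXML(): Promise<string> {
  if (chartXMLRef.current) {
    return chartXMLRef.current; // fast path: cached XML from context
  }
  return exportDiagramFromIframe(); // slow path, network-dependent
}
```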
Bedrock streaming responses don't auto-report token usage to OpenTelemetry.
This fix manually sets span attributes (ai.usage.promptTokens, gen_ai.usage.input_tokens)
from the AI SDK onFinish callback to ensure Langfuse captures token counts.
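A sketch of the idea, assuming AI SDK v5 usage fields (inputTokens/outputTokens) and that the active span at onFinish time is the generation span Langfuse reads; attribute names follow the description above:

```ts
import { streamText } from "ai";
import { trace } from "@opentelemetry/api";

const result = streamText({
  model,
  messages,
  onFinish: ({ usage }) => {
    // Bedrock streaming doesn't populate these automatically, so set them
    // manually so Langfuse can pick up token counts.
    const span = trace.getActiveSpan();
    span?.setAttribute("ai.usage.promptTokens", usage.inputTokens ?? 0);
    span?.setAttribute("ai.usage.completionTokens", usage.outputTokens ?? 0);
    span?.setAttribute("gen_ai.usage.input_tokens", usage.inputTokens ?? 0);
    span?.setAttribute("gen_ai.usage.output_tokens", usage.outputTokens ?? 0);
  },
});
```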
- Add Zod schema validation for log-feedback and log-save endpoints
- Create singleton LangfuseClient to avoid per-request instantiation
- Simplify log-save to only flag trace (no XML content sent)
- Use generic error messages to prevent info leakage
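A sketch of the validation schema and the singleton client described above (field names are assumed, not taken from the actual endpoints):

```ts
import { z } from "zod";
import { LangfuseClient } from "@langfuse/client";

// Singleton: one client per server process instead of one per request.
export const langfuseClient = new LangfuseClient();

// Request body for the log-feedback endpoint (field names assumed).
export const feedbackSchema = z.object({
  sessionId: z.string().min(1),
  feedback: z.enum(["up", "down"]),
  comment: z.string().max(2000).optional(),
});
```

On a failed parse, the route would return a generic 400 ("Invalid request") rather than echoing validation details, per the last bullet.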
- Update log-feedback API to find existing chat trace by sessionId and attach score to it
- Update log-save API to create span on existing chat trace instead of standalone trace
- Add thumbs up/down feedback buttons on assistant messages
- Add message regeneration and edit functionality
- Add save dialog with format selection (drawio, png, svg)
- Pass sessionId through components for Langfuse linking
- Remove default bedrock provider requirement
- Auto-detect provider when only one API key is configured
- Show helpful error when no keys or multiple keys without AI_PROVIDER
- Fixes #73
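A sketch of the detection rule described above (env var names for the individual providers are assumed; AI_PROVIDER is the variable referenced in the error message):

```ts
// If AI_PROVIDER is unset, pick the single provider whose key is configured;
// otherwise fail with a message explaining what to set.
const PROVIDER_KEYS: Record<string, string | undefined> = {
  anthropic: process.env.ANTHROPIC_API_KEY,
  openai: process.env.OPENAI_API_KEY,
  deepseek: process.env.DEEPSEEK_API_KEY,
  // ...remaining providers
};

export function resolveProvider(): string {
  if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER;

  const configured = Object.entries(PROVIDER_KEYS)
    .filter(([, key]) => key)
    .map(([name]) => name);

  if (configured.length === 1) return configured[0];

  throw new Error(
    configured.length === 0
      ? "No provider API key found. Add one provider key to your environment."
      : `Multiple provider keys found (${configured.join(", ")}). Set AI_PROVIDER to choose one.`
  );
}
```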
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
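A sketch of how the chat route might choose between the two prompts (exported names and model identifiers are assumed):

```ts
import { BASE_SYSTEM_PROMPT, EXTENDED_SYSTEM_PROMPT } from "@/lib/system-prompts"; // assumed exports

// Models whose prompt-cache minimum is larger than the base prompt.
const HIGH_CACHE_MINIMUM_MODELS = ["opus-4-5", "haiku-4-5"];

export function getSystemPrompt(modelId: string): string {
  return HIGH_CACHE_MINIMUM_MODELS.some((id) => modelId.includes(id))
    ? EXTENDED_SYSTEM_PROMPT // ~4000 tokens, clears the higher cache minimum
    : BASE_SYSTEM_PROMPT;
}
```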
* feat: add trace-level input/output to Langfuse observability
- Add @langfuse/client and @langfuse/tracing dependencies
- Wrap POST handler with observe() for proper tracing
- Use updateActiveTrace() to set trace input, output, sessionId, userId
- Filter Next.js HTTP spans in shouldExportSpan so AI SDK spans become root traces
- Enable recordInputs/recordOutputs in experimental_telemetry
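A rough sketch of the handler wrapping (request shape and trace fields are assumed; the Next.js span filtering happens separately in the span processor's shouldExportSpan hook):

```ts
import { observe, updateActiveTrace } from "@langfuse/tracing";

// Wrap the route handler so the AI SDK spans attach to a Langfuse trace.
export const POST = observe(async (req: Request) => {
  const { messages, sessionId, userId } = await req.json();

  updateActiveTrace({
    name: "chat",
    sessionId,
    userId,
    input: messages.at(-1), // latest user message as trace input
  });

  // ...streamText(...) with experimental_telemetry enabled, then
  // updateActiveTrace({ output }) once the response is known...
  return new Response("ok");
});
```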
* refactor: extract Langfuse logic to separate lib/langfuse.ts module
- Add handleExportWithoutHistory function for fetching current diagram state without saving to history
- Update onFetchChart to accept saveToHistory parameter (defaults to true)
- edit_diagram tool now fetches with saveToHistory=false since it only needs the current state
- Only the initial form submission saves to history as intended
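A sketch of the flag plumbing (addToHistory is a placeholder for whatever the panel actually calls):

```ts
// Fetch the current diagram XML; only add a history entry when asked to.
const onFetchChart = async (saveToHistory: boolean = true): Promise<string> => {
  const xml = await handleExportWithoutHistory();
  if (saveToHistory) {
    addToHistory(xml); // only the initial form submission passes true
  }
  return xml;
};

// edit_diagram path: const currentXML = await onFetchChart(false);
```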
- Switch from Geist to Plus Jakarta Sans (body) and JetBrains Mono (code)
- Add visual diff display for edit_diagram tool showing search/replace pairs
- Update color palette to clean modern OKLCH-based scheme
- Improve chat message display with better styling and animations
- Add syntax-highlighted code blocks for XML/JSON output
- Improve scrollbar and shadow utilities
- Add save button in chat input area with download icon
- Create SaveDialog component for filename input
- Export current diagram as .drawio file format
- Support custom filename with default timestamp-based name
Closes #53
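A sketch of the client-side download with the timestamp default (dialog wiring omitted):

```ts
// Serialize the current diagram XML into a .drawio file download.
function downloadDiagram(xml: string, filename?: string) {
  const fallback = `diagram-${new Date().toISOString().replace(/[:.]/g, "-")}.drawio`;
  const name = filename?.trim() || fallback;
  const blob = new Blob([xml], { type: "application/xml" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = name.endsWith(".drawio") ? name : `${name}.drawio`;
  a.click();
  URL.revokeObjectURL(url);
}
```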
- Add XML validation in handleDisplayChart before calling onDisplayChart
- Only update previousXML ref when validation passes to prevent state desync
- Add console error logging for failed validations
Fixes #5
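A sketch of the guard (ref and callback names follow the bullets above; the parse check stands in for the actual validation):

```ts
// Reject malformed XML before it ever reaches DrawIO.
function handleDisplayChart(xml: string) {
  const doc = new DOMParser().parseFromString(xml, "text/xml");
  if (doc.querySelector("parsererror")) {
    console.error("display_diagram: invalid XML, skipping render", xml.slice(0, 200));
    return; // previousXML ref is left untouched, so state stays in sync
  }
  previousXMLRef.current = xml; // only update the ref once validation passes
  onDisplayChart(xml);
}
```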
When models like DeepSeek (deepseek-chat, deepseek-reasoner) receive image
inputs, they return a cryptic error about 'unknown variant image_url'.
This change detects such errors and shows a clear message asking users
to remove the image or switch to a vision-capable model.
Fixes #42
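A sketch of the error mapping (the exact provider error text may vary, hence the loose match):

```ts
// Translate DeepSeek's raw error into something actionable for the user.
function toUserFacingError(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes("unknown variant") && message.includes("image_url")) {
    return "This model can't read images. Remove the image or switch to a vision-capable model.";
  }
  return "Something went wrong while generating the diagram.";
}
```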
- Remove sponsor iframe from chat panel header
- Add notice about switching from Opus 4.5 to Haiku 4.5 due to high traffic
- Add sponsor button next to Support & Contact section title
- Update all i18n about pages (EN, CN, JA)
- Add essential draw.io XML structure rules to system prompt
- Include critical rules about mxCell nesting (all must be direct children of root)
- Add shape/vertex and connector/edge examples with proper structure
- Improve tool description for display_diagram with validation rules
- Update xml_guide.md with better swimlane examples showing flat structure
- Add client-side XML validation to catch nested mxCell errors early
Helps address issues #40 (local Ollama models not working) and #39 (mxCell nesting errors)
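A sketch of the client-side structural check (it only targets the nested-mxCell case from #39; other validation details are assumed):

```ts
// A valid draw.io model keeps mxCell elements flat; an mxCell nested inside
// another mxCell is the structure DrawIO rejects.
function hasNestedMxCells(xml: string): boolean {
  const doc = new DOMParser().parseFromString(xml, "text/xml");
  return Array.from(doc.getElementsByTagName("mxCell")).some(
    (cell) => cell.parentElement?.tagName === "mxCell"
  );
}
```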
When sending a message, the history was being added twice because:
1. handleExport() triggers exportDiagram() which adds to history
2. AI responds and loadDiagram() is called, which internally triggers
another export event in DrawIO, adding a duplicate entry
Added expectHistoryExportRef flag to track user-initiated exports and
only add to history when the export was explicitly requested.
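A sketch of the flag (addToHistory and the export event wiring are placeholders):

```ts
import { useRef } from "react";

// Inside the editor component:
const expectHistoryExportRef = useRef(false);

// User-initiated export: mark that the next export event should be recorded.
const handleExport = () => {
  expectHistoryExportRef.current = true;
  exportDiagram();
};

// Export events fired internally by loadDiagram() fall through here
// without adding a duplicate entry.
const onExportEvent = (xml: string) => {
  if (expectHistoryExportRef.current) {
    addToHistory(xml);
    expectHistoryExportRef.current = false;
  }
};
```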
- Redesign English about page to mirror README.md content
- Add Chinese (/about/cn) and Japanese (/about/ja) versions
- Include language switcher, features, examples with images
- Add multi-provider support section and contact info
- Add GitHub Sponsors iframe button to chat panel header
- Update README with badges and language switcher
- Add Chinese README (README_CN.md)
- Add Japanese README (README_JA.md)
- Reorganize examples section in README
- Install @ai-sdk/deepseek package
- Add DeepSeek provider support to lib/ai-providers.ts
- Add DeepSeek configuration to env.example
- Update README.md with DeepSeek in provider list
- Support both default and custom base URL for DeepSeek
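A sketch of the provider wiring (DEEPSEEK_BASE_URL is assumed to follow the same naming as the other *_BASE_URL variables):

```ts
import { createDeepSeek } from "@ai-sdk/deepseek";

// Use the SDK default endpoint unless a custom base URL is configured.
const deepseek = createDeepSeek({
  apiKey: process.env.DEEPSEEK_API_KEY,
  ...(process.env.DEEPSEEK_BASE_URL ? { baseURL: process.env.DEEPSEEK_BASE_URL } : {}),
});

const model = deepseek("deepseek-chat"); // or "deepseek-reasoner"
```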
* fix: add custom anthropic baseURL
* feat: add baseURL support for all AI providers
- Add GOOGLE_BASE_URL for Google Generative AI
- Add AZURE_BASE_URL for Azure OpenAI
- Add OLLAMA_BASE_URL support (was documented but not implemented)
- Add OPENROUTER_BASE_URL for OpenRouter
- Fix missing semicolon in Anthropic case
- Update env.example with new environment variables
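The same optional-baseURL pattern, sketched for one of the providers listed above (Google shown here; the other factories follow suit):

```ts
import { createGoogleGenerativeAI } from "@ai-sdk/google";

// Only pass baseURL when the env var is set, so the SDK default endpoint
// keeps working out of the box.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
  ...(process.env.GOOGLE_BASE_URL ? { baseURL: process.env.GOOGLE_BASE_URL } : {}),
});
```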
Closes #20
---------
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
- Add lib/cached-responses.ts with pre-generated XML for 4 example prompts
- Modify chat API route to check cache before calling AI
- Cache returns instant response (~0.26s) vs AI generation (~20-25s)
- Add "(cached for instant response)" text to example panel
- Cache only activates for first message with empty diagram
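A sketch of the cache check in the chat route (the export shape of lib/cached-responses.ts is assumed to be a prompt-to-XML map):

```ts
import { CACHED_RESPONSES } from "@/lib/cached-responses"; // assumed: Record<string, string>

// Serve a pre-generated diagram only for the very first message while the
// canvas is still empty; everything else goes to the model.
function getCachedXML(prompt: string, currentXML: string, messageCount: number): string | null {
  if (messageCount > 1 || currentXML.trim() !== "") return null;
  return CACHED_RESPONSES[prompt.trim()] ?? null;
}
```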
* feat: add Bedrock prompt caching for system and conversation messages
- Add cache point to system message (2558+ tokens cached)
- Add cache point to last assistant message in conversation history
- This caches the entire conversation prefix for subsequent requests
- Reduces latency and costs for multi-turn conversations
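A sketch of where the cache points go, following the providerOptions pattern from the AI SDK Bedrock prompt-caching docs (message plumbing is assumed):

```ts
// Mark cache points on the system prompt and on the last assistant message
// so the whole conversation prefix is reused on subsequent requests.
const cachePoint = { bedrock: { cachePoint: { type: "default" as const } } };

const cachedMessages = [
  { role: "system" as const, content: systemPrompt, providerOptions: cachePoint },
  ...modelMessages.map((msg, i) =>
    i === lastAssistantIndex ? { ...msg, providerOptions: cachePoint } : msg
  ),
];
```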
* refactor: remove duplicated system prompt
* fix: filter out messages with empty content arrays for Bedrock API
The convertToModelMessages function from the AI SDK can produce messages with
empty content arrays when assistant messages have only tool call parts or
when tool results aren't properly converted. Bedrock API rejects these with
400 errors. This fix filters out invalid messages before sending to the API.
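A sketch of the filter (message shape follows the AI SDK ModelMessage content array):

```ts
// Drop messages whose content array came out empty after convertToModelMessages;
// Bedrock rejects them with a 400.
const safeMessages = modelMessages.filter(
  (msg) => !(Array.isArray(msg.content) && msg.content.length === 0)
);
```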
* fix: add diagnostic logging for empty message content
Added logging to capture the original UI message structure when empty content
is detected after conversion. This helps debug the root cause while the
filter provides a safety net for Bedrock API compatibility.