- Add App Context section describing the left/right panel layout
- Add App Features section with icon locations (history, theme, upload, export, clear)
- Dynamically inject model name into system prompt via {{MODEL_NAME}} placeholder
- Expand edit_diagram tool description with usage guidelines
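A minimal sketch of the {{MODEL_NAME}} substitution described above, assuming a prompt template string and a model name resolved at request time (the helper and parameter names are illustrative):

```ts
// Hypothetical helper: fills the {{MODEL_NAME}} placeholder in the system prompt template.
function buildSystemPrompt(template: string, modelName: string): string {
  return template.replaceAll("{{MODEL_NAME}}", modelName);
}
```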
The prose plugin was overriding text colors for markdown elements
(bold, headings, etc.) in user message bubbles, causing text to
blend with the dark primary background.
Added conditional styling that forces all child elements in user
messages to use text-primary-foreground color with !important to
override prose defaults.
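A minimal sketch of that styling, assuming a Tailwind/shadcn setup where text-primary-foreground is the bubble's foreground token (component and prop names are illustrative):

```tsx
import ReactMarkdown from "react-markdown";

// For user messages, the arbitrary variant [&_*] targets every element the prose
// plugin renders, and the ! modifier overrides its default text colors.
export function MessageBody({ content, isUser }: { content: string; isUser: boolean }) {
  return (
    <div
      className={
        "prose prose-sm max-w-none" +
        (isUser ? " [&_*]:!text-primary-foreground" : "")
      }
    >
      <ReactMarkdown>{content}</ReactMarkdown>
    </div>
  );
}
```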
- Add toggle button in chat input area to switch between min and sketch themes
- Show warning dialog before switching (clears messages and diagram)
- Persist theme selection in localStorage
- Default theme is minimal (hides shapes sidebar)
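A minimal sketch of the persistence logic; the storage key and stored values are assumptions:

```ts
const THEME_KEY = "drawio-theme"; // assumed storage key

export function loadTheme(): "min" | "sketch" {
  if (typeof window === "undefined") return "min"; // default: minimal theme (no shapes sidebar)
  return window.localStorage.getItem(THEME_KEY) === "sketch" ? "sketch" : "min";
}

export function saveTheme(theme: "min" | "sketch"): void {
  window.localStorage.setItem(THEME_KEY, theme);
}
```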
* feat: add markdown rendering for chat messages
- Add react-markdown and @tailwindcss/typography for markdown support
- Use prose styling for assistant message formatting
- Fix Radix ScrollArea viewport horizontal overflow issue
- Add CSS fix for viewport width constraint
* feat: add resizable chat panel
- Replace fixed width layout with react-resizable-panels
- Chat panel can be resized by dragging the handle
- Panel is collapsible with min 15% and max 50% width
- Ctrl+B keyboard shortcut still works for toggle
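A rough sketch of the layout with react-resizable-panels; minSize/maxSize mirror the constraints above, while defaultSize and the panel contents are placeholders:

```tsx
import { Panel, PanelGroup, PanelResizeHandle } from "react-resizable-panels";

export function AppLayout() {
  return (
    <PanelGroup direction="horizontal">
      {/* Chat panel: collapsible, resizable between 15% and 50% of the window */}
      <Panel defaultSize={30} minSize={15} maxSize={50} collapsible>
        {/* chat UI */}
      </Panel>
      <PanelResizeHandle />
      <Panel>
        {/* diagram canvas */}
      </Panel>
    </PanelGroup>
  );
}
```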
- Remove 15s streaming timeout detection (too slow, added complexity)
- Remove status indicator (issue resolved by switching model)
- Remove streamingError state and related refs
- Simplify onFinish callback (remove 503 detection logging)
- Remove errorHandler function (use default AI SDK errors)
The real fix was switching from the global.* to the us.* Bedrock model ID prefix.
This removes ~134 lines of unnecessary complexity.
- Add docs/ai-providers.md with detailed setup instructions for all providers
- Update README.md, README_CN.md, README_JA.md with provider guide links
- Add model capability requirements note
- Simplify provider list in READMEs
Closes #79
Addresses conflict between right-click drag and browser back gesture in
Chromium-based browsers. Shows browser confirmation dialog when user
tries to navigate away, preventing accidental page exits.
Closes #80
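A minimal sketch of that guard as a hypothetical hook; registering a beforeunload handler is what makes Chromium show its native leave-site confirmation:

```tsx
import { useEffect } from "react";

export function useLeaveConfirmation() {
  useEffect(() => {
    const handler = (event: BeforeUnloadEvent) => {
      event.preventDefault();
      event.returnValue = ""; // Chromium requires returnValue to be set for the dialog to appear
    };
    window.addEventListener("beforeunload", handler);
    return () => window.removeEventListener("beforeunload", handler);
  }, []);
}
```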
- Add error prop to ChatInput to detect error state
- Update isDisabled logic to allow retry when there's an error
- Pass combined error (SDK error + streamingError) to ChatInput
When Bedrock returns 503 ServiceUnavailableException before streaming
starts, AI SDK's onError fires but status may not transition to "ready".
This fix ensures the input is re-enabled when an error occurs, allowing
users to retry their request.
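A sketch of the disabled-state logic under these assumptions (the prop names and status values are illustrative, not the exact implementation):

```ts
interface ChatInputState {
  status: "ready" | "submitted" | "streaming" | "error";
  error?: Error;
}

// Allow retry whenever an error is present, even if status never returned to "ready".
export function isInputDisabled({ status, error }: ChatInputState): boolean {
  if (error) return false;
  return status !== "ready";
}
```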
Fixed an issue where uploaded files from a previous example would persist when clicking the "Animated Diagram" or "Creative Drawing" examples, which don't require image uploads.
When images are included in chat messages, the AI SDK telemetry with
recordInputs: true sends base64 image data to Langfuse. Langfuse then
attempts to upload these images to media storage, causing 1m31s timeouts.
Setting recordInputs: false prevents this while still capturing user
text input via setTraceInput().
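A sketch of the telemetry setting in the chat route; `model` and `messages` stand in for whatever the handler already builds:

```ts
import { streamText } from "ai";

declare const model: Parameters<typeof streamText>[0]["model"];
declare const messages: Parameters<typeof streamText>[0]["messages"];

const result = streamText({
  model,
  messages,
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // keep base64 image data out of the recorded trace input
    recordOutputs: true,
  },
});
```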
The React state update (setChartXML) is async, so chartXMLRef wasn't updated yet
when the edit_diagram tool callback checked it. The ref is now updated directly
in onFormSubmit, handleRegenerate, and handleEditMessage before sending.
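A minimal sketch of the ordering, extracted into a hypothetical hook; the point is that the ref write is synchronous while setChartXML is not:

```ts
import { useRef, useState } from "react";

export function useChartXml(sendMessage: () => void) {
  const [chartXML, setChartXML] = useState("");
  const chartXMLRef = useRef("");

  const submitWithXml = (xml: string) => {
    chartXMLRef.current = xml; // synchronous: tool callbacks read the ref
    setChartXML(xml);          // async state update, used for rendering
    sendMessage();
  };

  return { chartXML, chartXMLRef, submitWithXml };
}
```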
DrawIO iframe export was unreliable on Vercel due to network latency, causing the
edit_diagram tool to hang. The tool now uses chartXML from context directly,
falling back to an iframe export only when no cached XML exists.
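A sketch of the fallback order; both arguments are hypothetical stand-ins for the cached context value and the iframe export call:

```ts
async function getCurrentDiagramXml(
  cachedXml: string | null,
  exportFromIframe: () => Promise<string>
): Promise<string> {
  if (cachedXml) return cachedXml; // fast path: no round-trip to the DrawIO iframe
  return exportFromIframe();       // only when no cached XML exists
}
```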
Bedrock streaming responses don't auto-report token usage to OpenTelemetry.
This fix manually sets span attributes (ai.usage.promptTokens, gen_ai.usage.input_tokens)
from the AI SDK onFinish callback to ensure Langfuse captures token counts.
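A sketch of the manual attribute write, assuming the active span inside onFinish is the relevant AI SDK span and that usage exposes input/output token counts (field names vary between AI SDK versions):

```ts
import { trace } from "@opentelemetry/api";

const onFinish = ({ usage }: { usage: { inputTokens?: number; outputTokens?: number } }) => {
  const span = trace.getActiveSpan();
  span?.setAttribute("ai.usage.promptTokens", usage.inputTokens ?? 0);
  span?.setAttribute("gen_ai.usage.input_tokens", usage.inputTokens ?? 0);
  span?.setAttribute("gen_ai.usage.output_tokens", usage.outputTokens ?? 0);
};
```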
- Add Zod schema validation for log-feedback and log-save endpoints
- Create singleton LangfuseClient to avoid per-request instantiation
- Simplify log-save to only flag trace (no XML content sent)
- Use generic error messages to prevent info leakage
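A sketch of the log-feedback validation; the field names and score values are assumptions:

```ts
import { z } from "zod";

const feedbackSchema = z.object({
  sessionId: z.string().min(1),
  value: z.union([z.literal(1), z.literal(-1)]), // thumbs up / thumbs down
  comment: z.string().optional(),
});

export function parseFeedback(body: unknown) {
  const parsed = feedbackSchema.safeParse(body);
  if (!parsed.success) {
    throw new Error("Invalid request"); // generic message: no validation details leaked
  }
  return parsed.data;
}
```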
- Update log-feedback API to find existing chat trace by sessionId and attach score to it
- Update log-save API to create span on existing chat trace instead of standalone trace
- Add thumbs up/down feedback buttons on assistant messages
- Add message regeneration and edit functionality
- Add save dialog with format selection (drawio, png, svg)
- Pass sessionId through components for Langfuse linking
- Remove default bedrock provider requirement
- Auto-detect provider when only one API key is configured
- Show helpful error when no keys or multiple keys without AI_PROVIDER
- Fixes #73
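A sketch of the detection logic; the provider-to-env-key mapping is an assumption:

```ts
const PROVIDER_KEYS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_GENERATIVE_AI_API_KEY",
};

export function detectProvider(env: NodeJS.ProcessEnv = process.env): string {
  if (env.AI_PROVIDER) return env.AI_PROVIDER;
  const configured = Object.keys(PROVIDER_KEYS).filter((name) => env[PROVIDER_KEYS[name]]);
  if (configured.length === 1) return configured[0];
  throw new Error(
    configured.length === 0
      ? "No AI provider API key is configured."
      : "Multiple provider keys found; set AI_PROVIDER to choose one."
  );
}
```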
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
* feat: add trace-level input/output to Langfuse observability
- Add @langfuse/client and @langfuse/tracing dependencies
- Wrap POST handler with observe() for proper tracing
- Use updateActiveTrace() to set trace input, output, sessionId, userId
- Filter Next.js HTTP spans in shouldExportSpan so AI SDK spans become root traces
- Enable recordInputs/recordOutputs in experimental_telemetry
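A rough sketch of the wrapped handler using the functions named above; request parsing and the response body are placeholders:

```ts
import { observe, updateActiveTrace } from "@langfuse/tracing";

export const POST = observe(async (req: Request) => {
  const { messages, sessionId, userId } = await req.json();

  updateActiveTrace({
    name: "chat",
    sessionId,
    userId,
    input: messages,
  });

  // ... run streamText with experimental_telemetry enabled and stream the result ...
  return new Response("ok");
});
```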
* refactor: extract Langfuse logic to separate lib/langfuse.ts module
- Add handleExportWithoutHistory function for fetching current diagram state without saving to history
- Update onFetchChart to accept saveToHistory parameter (defaults to true)
- edit_diagram tool now fetches with saveToHistory=false since it only needs the current state
- Only the initial form submission saves to history as intended
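A sketch of the updated signature; the export and history helpers are hypothetical stand-ins:

```ts
declare function exportCurrentDiagram(): Promise<string>; // hypothetical iframe export
declare function pushToHistory(xml: string): void;        // hypothetical history store

export async function onFetchChart(saveToHistory: boolean = true): Promise<string> {
  const xml = await exportCurrentDiagram();
  if (saveToHistory) {
    pushToHistory(xml); // skipped when edit_diagram only needs the current state
  }
  return xml;
}
```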
- Switch from Geist to Plus Jakarta Sans (body) and JetBrains Mono (code)
- Add visual diff display for edit_diagram tool showing search/replace pairs
- Update color palette to clean modern OKLCH-based scheme
- Improve chat message display with better styling and animations
- Add syntax-highlighted code blocks for XML/JSON output
- Improve scrollbar and shadow utilities
- Add save button in chat input area with download icon
- Create SaveDialog component for filename input
- Export current diagram as .drawio file format
- Support custom filename with default timestamp-based name
Closes #53
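A sketch of the client-side download; the helper name is hypothetical:

```ts
export function downloadDrawio(xml: string, filename?: string) {
  const name = filename ?? `diagram-${Date.now()}.drawio`; // timestamp-based default
  const blob = new Blob([xml], { type: "application/xml" });
  const url = URL.createObjectURL(blob);
  const anchor = document.createElement("a");
  anchor.href = url;
  anchor.download = name;
  anchor.click();
  URL.revokeObjectURL(url);
}
```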
- Add XML validation in handleDisplayChart before calling onDisplayChart
- Only update previousXML ref when validation passes to prevent state desync
- Add console error logging for failed validations
Fixes #5
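A sketch of such a structural check (not the exact implementation): parse the XML and reject it when an mxCell is nested inside another mxCell rather than sitting directly under root:

```ts
export function validateDiagramXml(xml: string): boolean {
  const doc = new DOMParser().parseFromString(xml, "text/xml");
  if (doc.querySelector("parsererror")) {
    console.error("Diagram XML rejected: parse error");
    return false;
  }
  if (doc.querySelector("mxCell mxCell")) {
    console.error("Diagram XML rejected: mxCell elements must be direct children of <root>");
    return false;
  }
  return true;
}
```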
When models like DeepSeek (deepseek-chat, deepseek-reasoner) receive image
inputs, they return a cryptic error about 'unknown variant image_url'.
This change detects such errors and shows a clear message asking users
to remove the image or switch to a vision-capable model.
Fixes #42
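A sketch of the error mapping; the matched substrings come from the message above, the rest is illustrative:

```ts
export function mapProviderError(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes("unknown variant") && message.includes("image_url")) {
    return "This model cannot process images. Remove the image or switch to a vision-capable model.";
  }
  return "Something went wrong. Please try again.";
}
```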
- Remove sponsor iframe from chat panel header
- Add notice about switching from Opus 4.5 to Haiku 4.5 due to high traffic
- Add sponsor button next to Support & Contact section title
- Update all i18n about pages (EN, CN, JA)
- Add essential draw.io XML structure rules to system prompt
- Include critical rules about mxCell nesting (all must be direct children of root)
- Add shape/vertex and connector/edge examples with proper structure
- Improve tool description for display_diagram with validation rules
- Update xml_guide.md with better swimlane examples showing flat structure
- Add client-side XML validation to catch nested mxCell errors early
Helps address issues #40 (local Ollama models not working) and #39 (mxCell nesting errors)