Commit Graph

74 Commits

Author SHA1 Message Date
dayuan.jiang
62e07f5f9c feat: add append_diagram tool for truncation continuation
When LLM output hits maxOutputTokens mid-generation, instead of
failing with an error loop, the system now:

1. Detects truncation (missing </root> in XML)
2. Stores partial XML and tells LLM to use new append_diagram tool
3. LLM continues generating from where it stopped
4. Fragments are accumulated until XML is complete
5. Server limits to 5 steps via stepCountIs(5)

Key changes:
- Add append_diagram tool definition in route.ts
- Add append_diagram handler in chat-panel.tsx
- Track continuation mode separately from error mode
- Continuation mode has unlimited retries (not counted against limit)
- Error mode still limited to MAX_AUTO_RETRY_COUNT (1)
- Update system prompts to document append_diagram tool
2025-12-14 09:38:47 +09:00
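
A minimal sketch of the continuation flow described in the commit above, under assumed helper names (pendingXml, isTruncated, render); the actual wiring in route.ts and chat-panel.tsx differs in detail.

```ts
// Sketch only: helper names are assumptions, not the project's real code.
let pendingXml = ""

// Truncation is detected by the missing </root> closing tag.
function isTruncated(xml: string): boolean {
    return !xml.includes("</root>")
}

// Called with the output of the initial diagram tool call.
function handleDiagramOutput(xml: string): "complete" | "continue" {
    if (!isTruncated(xml)) {
        pendingXml = ""
        return "complete"
    }
    // Store the partial XML and ask the model to continue via append_diagram.
    pendingXml = xml
    return "continue"
}

// Called with each append_diagram fragment; accumulate until the XML closes.
function handleAppendDiagram(
    fragment: string,
    render: (xml: string) => void,
): "complete" | "continue" {
    pendingXml += fragment
    if (isTruncated(pendingXml)) {
        return "continue" // still incomplete, keep appending
    }
    render(pendingXml)
    pendingXml = ""
    return "complete"
}
```
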
Dayuan Jiang
987dc9f026 fix: add configurable MAX_OUTPUT_TOKENS to prevent truncation (#251)
- Add MAX_OUTPUT_TOKENS env var (fixes output truncation with Bedrock)
- Remove redundant fixToolCallInputs function
- Remove jsonrepair dependency
- Consolidate duplicate lastMessage/userInputText variables
2025-12-13 23:28:41 +09:00
Shashi kiran M S
c9b60bfdb2 feat: Add a new chat button with a confirmation modal (#229)
* feat: Add a new chat button with a confirmation modal

* Fix for PR comments

* fix: add error handling and proper cleanup in handleNewChat

- Add try-catch for localStorage operations to handle quota exceeded,
  private browsing, and other storage errors
- Use handleFileChange([]) instead of setFiles([]) to properly clear
  pdfData Map alongside files
- Only show success toast when localStorage operations succeed
- Show warning toast if localStorage fails but chat state is cleared

---------

Co-authored-by: Dayuan Jiang <jdy.toh@gmail.com>
2025-12-12 10:08:18 +09:00
Twelveeee
f170bb41ae fix: custom model setting bug (#227)
* fix: custom model setting bug

* refactor: consolidate aiProvider checks for cleaner code

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-12 09:33:07 +09:00
Biki Kalita
77a25d2543 Persist processed tool calls to prevent replay after chat restore (#224) 2025-12-11 20:48:48 +09:00
Dayuan Jiang
b9da24dd6d fix: limit auto-retry to 3 attempts and enforce quota checks (#219)
- Add MAX_AUTO_RETRY_COUNT (3) to prevent infinite retry loops
- Check token and TPM limits before each auto-retry
- Reset retry counter on user-initiated messages
- Show toast notification when limits are reached

Fixes issue where models returning invalid tool inputs caused 45+ API
requests due to sendAutomaticallyWhen having no retry limit or quota check.
2025-12-11 17:56:40 +09:00
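
A minimal sketch of a bounded auto-retry predicate along the lines described above; lastMessageHasToolError and hasQuotaRemaining are illustrative parameters, not the project's real identifiers.

```ts
// Sketch only: the real predicate is wired into sendAutomaticallyWhen.
const MAX_AUTO_RETRY_COUNT = 3

let autoRetryCount = 0

function shouldAutoRetry(
    lastMessageHasToolError: boolean,
    hasQuotaRemaining: boolean, // daily token + TPM checks
): boolean {
    if (!lastMessageHasToolError) return false
    if (autoRetryCount >= MAX_AUTO_RETRY_COUNT) return false // cap retries
    if (!hasQuotaRemaining) return false // respect quota before retrying
    autoRetryCount += 1
    return true
}

// User-initiated messages reset the counter so later errors can retry again.
function onUserSend() {
    autoRetryCount = 0
}
```
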
Dayuan Jiang
869391a029 refactor: eliminate code duplication (DRY principle) (#211)
## Problem Solved
Previous refactoring added 105 lines (1476→1581) by extracting code into separate files without eliminating duplication. This refactor focuses on reducing code size through deduplication while maintaining file separation for maintainability.

## Summary
- Reduced total lines from 1581 to 1519 (-62 lines, 3.9% reduction)
- Eliminated duplicate patterns using generic helpers and factory functions
- Maintained file structure for maintainability
- Zero functional changes - same behavior

### Phase 1: DRY use-quota-manager.tsx
- Created parseStorageCount() helper (eliminates 6x localStorage read duplication)
- Created createQuotaChecker() factory (consolidates 3 check function bodies)
- Created createQuotaIncrementer() factory (consolidates 3 increment function bodies)
- Result: 242→247 lines (+5 lines, but fully DRY with eliminated duplication)

### Phase 2: DRY chat-panel.tsx (1176→1109 lines, -67 lines)

#### 2.1: Extract checkAllQuotaLimits helper
- Replaced 3 occurrences of 18-line quota check blocks
- Saved 36 lines

#### 2.2: Extract sendChatMessage helper
- Replaced 3 occurrences of 21-line sendMessage+headers blocks
- Saved 42 lines

#### 2.3: Extract processFilesAndAppendContent helper
- Replaced 2 occurrences of file processing loops
- Handles PDF, text, and image files uniformly
- Async helper with optional image parts parameter
2025-12-11 14:28:02 +09:00
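
A minimal sketch of the Phase 1 helpers, assuming an illustrative storage key and limit; the real implementations live in use-quota-manager.tsx.

```ts
// Read a numeric counter from localStorage, guarding against NaN/corruption.
function parseStorageCount(key: string): number {
    const parsed = Number.parseInt(localStorage.getItem(key) ?? "0", 10)
    return Number.isNaN(parsed) ? 0 : parsed
}

// Factory producing one limit-check function per quota dimension.
function createQuotaChecker(key: string, limit: number) {
    return () => parseStorageCount(key) < limit
}

// Factory producing one increment function per quota dimension.
function createQuotaIncrementer(key: string) {
    return (amount = 1) => {
        localStorage.setItem(key, String(parseStorageCount(key) + amount))
    }
}

// Illustrative usage (key and limit are not the project's real values):
const checkRequestLimit = createQuotaChecker("next-ai-draw-io-request-count", 100)
const incrementRequestCount = createQuotaIncrementer("next-ai-draw-io-request-count")
```
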
Dayuan Jiang
c0347dd55d fix: prevent diagram regeneration loop after successful display (#206)
- Changed sendAutomaticallyWhen to only auto-resubmit on tool errors
- Extracted logic to shouldAutoResubmit() function with JSDoc
- Added TypeScript interfaces for type safety (MessagePart, ChatMessage)
- Wrapped debug logs with DEBUG flag for production readiness
- Added TOOL_ERROR_STATE constant to avoid hardcoded strings

Problem: AI was regenerating diagrams 3+ times even after successful display
Root cause: lastAssistantMessageIsCompleteWithToolCalls auto-resubmits on both success AND error
Solution: Custom logic that only auto-resubmits on errors, stops on success
2025-12-11 09:47:30 +09:00
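
A minimal sketch of the error-only resubmit logic, with simplified message shapes; the real shouldAutoResubmit() in chat-panel.tsx handles more part types.

```ts
const TOOL_ERROR_STATE = "output-error"

interface MessagePart {
    type: string
    state?: string
}

interface ChatMessage {
    role: "user" | "assistant" | "system"
    parts: MessagePart[]
}

// Resubmit only when the last assistant message contains a failed tool call;
// a successfully displayed diagram never triggers another generation.
function shouldAutoResubmit(messages: ChatMessage[]): boolean {
    const last = messages.at(-1)
    if (!last || last.role !== "assistant") return false
    return last.parts.some(
        (part) => part.type.startsWith("tool-") && part.state === TOOL_ERROR_STATE,
    )
}
```
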
Biki Kalita
a047a6ff97 feat: Display AI reasoning/thinking blocks in chat interface (#152)
* feat: Add reasoning/thinking blocks display in chat interface

* feat: add multi-provider options support and replace custom reasoning UI with AI Elements

* resolve conflicting reasoning configs and correct provider-specific reasoning parameters

* try to solve conflict

* fix: simplify reasoning display and remove unnecessary dependencies

- Remove Streamdown dependency (~5MB) - reasoning is plain text only
- Fix Bedrock providerOptions merging for Claude reasoning configs
- Remove unsupported DeepSeek reasoning configuration
- Clean up unused environment variables (REASONING_BUDGET_TOKENS, REASONING_EFFORT, DEEPSEEK_REASONING_*)
- Remove dead commented code from route.ts

Reasoning blocks contain plain thinking text and don't need markdown/diagram/code rendering.

* feat: comprehensive reasoning support improvements

Major improvements:
- Auto-enable reasoning display for all supported models
- Fix provider-specific reasoning configurations
- Remove unnecessary Streamdown dependency (~5MB)
- Clean up debug logging

Provider changes:
- OpenAI: Auto-enable reasoningSummary for o1/o3/gpt-5 models
- Google: Auto-enable includeThoughts for Gemini 2.5/3 models
- Bedrock: Restrict reasoningConfig to only Claude/Nova (fixes MiniMax error)
- Ollama: Add thinking support for qwen3-like models

Other improvements:
- Remove ENABLE_REASONING toggle (always enabled)
- Fix Bedrock providerOptions merging for Claude
- Simplify reasoning component (plain text rendering)
- Clean up unused environment variables

* fix: critical bugs and documentation gaps in reasoning support

Critical fixes:
- Fix Bedrock shallow merge bug (deep merge preserves anthropicBeta + reasoningConfig)
- Add parseInt validation with parseIntSafe helper (prevents NaN errors)
- Validate all numeric env vars with min/max ranges

Documentation improvements:
- Add BEDROCK_REASONING_BUDGET_TOKENS and BEDROCK_REASONING_EFFORT to env.example
- Add OLLAMA_ENABLE_THINKING to env.example
- Update JSDoc with accurate env var list and ranges

Code cleanup:
- Remove debug console.log statements from route.ts
- Refactor duplicate providerOptions assignments

---------

Co-authored-by: Dayuan Jiang <34411969+DayuanJiang@users.noreply.github.com>
Co-authored-by: Dayuan Jiang <jdy.toh@gmail.com>
2025-12-11 00:24:43 +09:00
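
A minimal sketch of the parseIntSafe validation described in the last fix above; the signature, defaults, and ranges shown are assumptions for illustration.

```ts
// Sketch only: clamp a numeric env var to a sane range, never return NaN.
function parseIntSafe(
    value: string | undefined,
    fallback: number,
    min: number,
    max: number,
): number {
    if (!value) return fallback
    const parsed = Number.parseInt(value, 10)
    if (Number.isNaN(parsed)) return fallback // prevent NaN reaching providerOptions
    return Math.min(max, Math.max(min, parsed))
}

// Illustrative usage with one of the env vars mentioned above (values assumed):
const reasoningBudgetTokens = parseIntSafe(
    process.env.BEDROCK_REASONING_BUDGET_TOKENS,
    2048,
    1024,
    64000,
)
```
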
Dayuan Jiang
d2ba133eaf feat: add PDF and text file upload support (#205)
- Add client-side PDF text extraction using unpdf library
- Support text files (.txt, .md, .json, .csv, .py, .js, .ts, etc.)
- Add file preview with character count for PDF/text files
- Add 150k character limit for extracted content
- Highlight Paper to Diagram example with NEW badge
- Fix React hydration error by adding explicit IDs to ResizablePanelGroup
- Remove code duplication by centralizing file utilities in pdf-utils.ts
2025-12-10 21:32:35 +09:00
Dayuan Jiang
43e5993f47 fix: improve LLM diagram context awareness and image preview (#202)
- Add replaceHistoricalToolInputs to replace XML in tool calls with placeholders
- Send both previousXml and current xml so LLM can understand user's manual edits
- Update system message to mark current XML as authoritative source of truth
- Fix React StrictMode issue with blob URL cleanup in FilePreviewList
- Add unoptimized prop to Image components for blob URLs
2025-12-10 18:04:37 +09:00
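
A minimal sketch of replacing XML in historical tool-call inputs with a placeholder; the message shapes and placeholder text are assumptions, not the project's actual types.

```ts
const XML_PLACEHOLDER = "[diagram XML omitted - the current XML is authoritative]"

interface ToolPart {
    type: string
    input?: { xml?: string } & Record<string, unknown>
}

interface HistoryMessage {
    role: string
    parts: ToolPart[]
}

// Strip bulky XML from past tool calls so only the current diagram XML
// (sent separately) is treated as the source of truth.
function replaceHistoricalToolInputs(messages: HistoryMessage[]): HistoryMessage[] {
    return messages.map((message) => ({
        ...message,
        parts: message.parts.map((part) =>
            part.type.startsWith("tool-") && part.input?.xml
                ? { ...part, input: { ...part.input, xml: XML_PLACEHOLDER } }
                : part,
        ),
    }))
}
```
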
try2love
5da4ef67ec feat: light/dark mode switch (#138)
Summary

- Adds browser theme detection on first visit using prefers-color-scheme media query
- Renames localStorage key from dark-mode to next-ai-draw-io-dark-mode for consistency with other keys
- Uses STORAGE_DIAGRAM_XML_KEY constant instead of hardcoded string in diagram-context.tsx

Changes

app/page.tsx:
- On first visit (no saved preference), detect browser's color scheme preference
- Update localStorage key to follow project naming convention (next-ai-draw-io-*)

contexts/diagram-context.tsx:
- Import STORAGE_DIAGRAM_XML_KEY from chat-panel.tsx
- Replace hardcoded "next-ai-draw-io-diagram-xml" with the constant
2025-12-10 09:21:15 +09:00
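
A minimal sketch of the first-visit detection described above; the storage key name comes from the commit, the rest is illustrative of app/page.tsx.

```ts
const DARK_MODE_KEY = "next-ai-draw-io-dark-mode"

function getInitialDarkMode(): boolean {
    const saved = localStorage.getItem(DARK_MODE_KEY)
    if (saved !== null) return saved === "true" // respect an explicit choice
    // First visit: fall back to the browser's color scheme preference.
    return window.matchMedia("(prefers-color-scheme: dark)").matches
}
```
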
Dayuan Jiang
97ab82e027 feat: add bring-your-own-API-key support (#186)
- Add AI provider settings to config panel (provider, model, API key, base URL)
- Support 7 providers: OpenAI, Anthropic, Google, Azure, OpenRouter, DeepSeek, SiliconFlow
- Client API keys stored in localStorage, never stored on server
- Client settings override server env vars when provided
- Skip server credential validation when client provides API key
- Bypass usage limits (request/token/TPM) when using own API key
- Add /api/config endpoint for fetching usage limits
- Add privacy notices to settings dialog, about pages, and quota toast
- Add clear settings button to reset saved API keys
- Update README files (EN/CN/JA) with BYOK documentation

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-09 17:50:07 +09:00
dayuan.jiang
77cb10393b fix: translate image content block error to user-friendly message 2025-12-09 16:00:19 +09:00
Dayuan Jiang
967d63c57e feat: support minimax model (#185)
* feat: support minimax model with XML wrapping fix

- Add wrapWithMxFile utility to properly wrap XML for draw.io
- Fix 'Not a diagram file' error when model generates raw <root> XML
- Add supportsPromptCaching check for conditional caching
- Only enable Bedrock prompt caching for Claude models

* docs: update model mention to minimax-m2 across About pages and READMEs

- Update tooltip in chat-panel.tsx to mention minimax-m2 model change
- Update English, Chinese, and Japanese About pages with model change info
- Update English, Chinese, and Japanese READMEs with demo site model note

---------

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-09 15:53:59 +09:00
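
A minimal sketch of the XML wrapping fix, assuming a bare-bones mxfile envelope; the real wrapWithMxFile utility may set additional attributes on the wrapper elements.

```ts
// Wrap a bare <root> fragment so draw.io accepts it as a diagram file.
function wrapWithMxFile(xml: string): string {
    const trimmed = xml.trim()
    if (trimmed.startsWith("<mxfile")) return trimmed // already a full file
    const model = trimmed.startsWith("<mxGraphModel")
        ? trimmed
        : `<mxGraphModel>${trimmed}</mxGraphModel>`
    return `<mxfile><diagram name="Page-1">${model}</diagram></mxfile>`
}
```
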
singledog957
95c5a75ca3 feat: Show detailed error messages instead of generic 'Internal server error' (#144) (#154)
* feat: Show detailed error messages instead of generic 'Internal server error' (#144)

* refactor: simplify error handling logic per feedback

* refactor: imported AI SDK error handler

* fix: remove unused import and expand sensitive data filter

- Remove unused NoSuchModelError import
- Add 'secret', 'password', 'credential' to sensitive data filter

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-08 20:52:18 +09:00
Dayuan Jiang
ac09f9f8f9 feat: redesign usage limits section on about pages (#173)
- Redesign usage limits card with gradient border and modern styling
- Remove emojis and combine title/subtitle on same line
- Make all 3 language pages (EN/CN/JP) consistent in design
- Update text content with exact localized wording
- Add warning triangle icon in chat panel linking to about page
- Add 'Learn more' link in quota limit toast
- Open about page links in new tab to preserve diagram state
2025-12-08 20:26:51 +09:00
Dayuan Jiang
622829b903 feat: add daily token limit with actual usage tracking (#171)
* feat: add daily token limit with actual usage tracking

- Add DAILY_TOKEN_LIMIT env var for configurable daily token limit
- Track actual tokens from Bedrock API response metadata (not estimates)
- Server sends inputTokens + cachedInputTokens + outputTokens via messageMetadata
- Client increments token count in onFinish callback with actual usage
- Add NaN guards to prevent corrupted localStorage values
- Add token limit toast notification with quota display
- Remove client-side token estimation (was blocking legitimate requests)
- Switch to js-tiktoken for client compatibility (pure JS, no WASM)

* feat: add TPM (tokens per minute) rate limiting

- Add 50k tokens/min client-side rate limit
- Track tokens per minute with automatic minute rollover
- Check TPM limit after daily limits pass
- Show toast when rate limit reached
- NaN guards for localStorage values

* feat: make TPM limit configurable via TPM_LIMIT env var

* chore: restore cache debug logs

* fix: prevent race condition in TPM tracking

checkTPMLimit was resetting TPM count to 0 when checking, which
overwrote the count saved by incrementTPMCount. Now checkTPMLimit
only reads and incrementTPMCount handles all writes.

* chore: improve TPM limit error message clarity
2025-12-08 18:56:34 +09:00
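
A minimal sketch of the read/write split from the race-condition fix above; the storage keys shown are illustrative, not the project's real ones.

```ts
const TPM_COUNT_KEY = "next-ai-draw-io-tpm-count"
const TPM_MINUTE_KEY = "next-ai-draw-io-tpm-minute"

const currentMinute = () => Math.floor(Date.now() / 60_000)

// Read-only: never writes, so it cannot overwrite a count saved elsewhere.
function checkTPMLimit(limit: number): boolean {
    if (Number(localStorage.getItem(TPM_MINUTE_KEY)) !== currentMinute()) {
        return true // a new minute has started, the old count no longer applies
    }
    const count = Number(localStorage.getItem(TPM_COUNT_KEY)) || 0 // NaN guard
    return count < limit
}

// All writes, including the minute rollover, happen here.
function incrementTPMCount(tokens: number) {
    const minute = currentMinute()
    const sameMinute = Number(localStorage.getItem(TPM_MINUTE_KEY)) === minute
    const count = sameMinute ? Number(localStorage.getItem(TPM_COUNT_KEY)) || 0 : 0
    localStorage.setItem(TPM_MINUTE_KEY, String(minute))
    localStorage.setItem(TPM_COUNT_KEY, String(count + tokens))
}
```
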
Dayuan Jiang
728dda5267 feat: add daily request limit with custom toast notification (#167)
- Add DAILY_REQUEST_LIMIT env var support in config API
- Track request count in localStorage (resets daily)
- Show friendly quota limit toast with self-host/sponsor links
- Apply limit to send, regenerate, and edit message actions
2025-12-08 14:26:01 +09:00
Dayuan Jiang
4070772733 perf: disable prefetch on About link to reduce requests (#162) 2025-12-08 11:03:29 +09:00
Biki Kalita
03db4c8096 fix(diagram): save history snapshot after edit_diagram tool execution (#155) 2025-12-08 09:44:10 +09:00
Dayuan Jiang
86420a42c6 fix: implement client-side caching for example diagrams (#150)
- Add client-side cache check in onFormSubmit to bypass API calls for example prompts
- Use findCachedResponse to match input against cached examples
- Directly set messages with cached tool response when example matches
- Hide regenerate button for cached example responses (toolCallId starts with 'cached-')
- Prevents unnecessary API calls when using example buttons

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 19:36:09 +09:00
Dayuan Jiang
0baf21fadb fix: validate XML before displaying diagram to catch duplicate IDs (#147)
- Add validation to loadDiagram in diagram-context, returns error or null
- display_diagram and edit_diagram tools now check validation result
- Return error to AI agent with state: output-error so it can retry
- Skip validation for trusted sources (localStorage, history, internal templates)
- Add debug logging for tool call inputs to diagnose Bedrock API issues
2025-12-07 14:38:15 +09:00
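
A minimal sketch of the duplicate-ID check using the browser's DOMParser; the real validation in diagram-context covers more failure modes, with null signalling valid XML as described above.

```ts
function validateDiagramXml(xml: string): string | null {
    const doc = new DOMParser().parseFromString(xml, "text/xml")
    if (doc.querySelector("parsererror")) {
        return "Invalid XML: the diagram could not be parsed."
    }
    const seen = new Set<string>()
    for (const cell of Array.from(doc.querySelectorAll("mxCell[id]"))) {
        const id = cell.getAttribute("id") ?? ""
        if (seen.has(id)) {
            return `Duplicate mxCell id "${id}": ids must be unique.`
        }
        seen.add(id)
    }
    return null // null means the XML is safe to hand to display_diagram
}
```
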
Dayuan Jiang
b1bc1a6dc6 feat: auto-save and restore session state (#135)
- Save and restore chat messages, XML snapshots, session ID, and diagram XML to localStorage
- Restore diagram when DrawIO becomes ready (using new onLoad callback)
- Change close protection default to false since auto-save handles persistence
- Clear localStorage when clearing chat
- Handle edge cases: undefined edit fields, empty chartXML, missing access code header
2025-12-07 01:39:09 +09:00
Dayuan Jiang
6965a54f48 feat: upgrade @ai-sdk/react to 2.0.107 and migrate to new API (#126)
- Upgrade @ai-sdk/react from 2.0.22 to 2.0.107
- Migrate from addToolResult to addToolOutput (new API)
- Add output-error state for proper error signaling to model
- Add sendAutomaticallyWhen for auto-retry on tool errors
- Add stop function ref for potential future use

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 00:40:11 +09:00
Dayuan Jiang
9f77199272 feat: add configurable close protection setting (#123)
- Add Close Protection toggle to Settings dialog
- Save setting to localStorage (default: enabled)
- Make beforeunload confirmation conditional
- Settings button now always visible in header
- Add shadcn Switch and Label components
2025-12-06 21:42:28 +09:00
Dayuan Jiang
e893bd60f9 fix: resolve biome lint errors and memory leak in file preview (#118)
- Disable noisy biome rules (noExplicitAny, useExhaustiveDependencies, etc.)
- Fix memory leak in file-preview-list.tsx with useRef pattern
- Separate unmount cleanup into dedicated useEffect
- Add ToolPartLike interface for type safety in chat-message-display
- Add accessibility attributes (role, tabIndex, onKeyDown)
- Replace autoFocus with useEffect focus pattern
- Minor syntax improvements (optional chaining, key fixes)
2025-12-06 16:18:26 +09:00
Dayuan Jiang
150eb1ff63 chore: add Biome for formatting and linting (#116)
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
2025-12-06 12:46:40 +09:00
Dayuan Jiang
7205896c8c feat: add mobile layout with chat panel at bottom (#109) 2025-12-05 23:25:59 +09:00
Dayuan Jiang
4e32a094b1 fix: restore status notice indicator removed in PR #77 (#107) 2025-12-05 23:22:29 +09:00
Dayuan Jiang
3f35c52527 feat: add draw.io theme toggle between minimal and sketch (#106)
- Add toggle button in chat input area to switch between min and sketch themes
- Show warning dialog before switching (clears messages and diagram)
- Persist theme selection in localStorage
- Default theme is minimal (hides shapes sidebar)
2025-12-05 23:10:48 +09:00
Dayuan Jiang
0af5229477 feat: add markdown rendering and resizable chat panel (#104)
* feat: add markdown rendering for chat messages

- Add react-markdown and @tailwindcss/typography for markdown support
- Use prose styling for assistant message formatting
- Fix Radix ScrollArea viewport horizontal overflow issue
- Add CSS fix for viewport width constraint

* feat: add resizable chat panel

- Replace fixed width layout with react-resizable-panels
- Chat panel can be resized by dragging the handle
- Panel is collapsible with min 15% and max 50% width
- Ctrl+B keyboard shortcut still works for toggle
2025-12-05 22:42:39 +09:00
Twelveeee
3fb349fb3e fix: clear button can't clear error message & feat: add settings dialog and access code (#77)
* fix: clear button can't clear error message

* new: add settings dialog and access code

* fix: address review feedback - dark mode, types, formatting

* feat: only show Settings button when access code is required

* refactor: rename ACCESS_CODES to ACCESS_CODE_LIST

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-05 22:09:34 +09:00
Dayuan Jiang
ed29e32ba3 feat: restore Langfuse observability integration (#103)
- Add lib/langfuse.ts with client, trace input/output, telemetry config
- Add instrumentation.ts for OpenTelemetry setup with Langfuse span processor
- Add /api/log-save endpoint for logging diagram saves
- Add /api/log-feedback endpoint for thumbs up/down feedback
- Update chat route with sessionId tracking and telemetry
- Add feedback buttons (thumbs up/down) to chat messages
- Add sessionId tracking throughout the app
- Update env.example with Langfuse configuration
- Add @langfuse/client, @langfuse/otel, @langfuse/tracing, @opentelemetry/sdk-trace-node
2025-12-05 21:15:02 +09:00
Dayuan Jiang
4cd78dc561 chore: remove complex 503 error handling code (#102)
- Remove 15s streaming timeout detection (too slow, added complexity)
- Remove status indicator (issue resolved by switching model)
- Remove streamingError state and related refs
- Simplify onFinish callback (remove 503 detection logging)
- Remove errorHandler function (use default AI SDK errors)

The real fix was switching from global.* to us.* Bedrock model.
This removes ~134 lines of unnecessary complexity.
2025-12-05 20:18:19 +09:00
Dayuan Jiang
e0c5d966e3 feat: add image upload validation with 2MB limit and max 5 files (#101)
- Add 2MB file size limit with client and server-side validation
- Add max 5 files limit per upload
- Add sonner toast library for better error notifications
- Create ErrorToast component with keyboard accessibility
- Batch multiple validation errors into single toast
- Validate file size in all upload methods (input, paste, drag-drop)
- Add server-side validation in /api/chat endpoint
2025-12-05 19:30:50 +09:00
Dayuan Jiang
57bfc9cef7 fix: update status indicator to show outage resolved (#98) 2025-12-05 18:07:25 +09:00
Dayuan Jiang
970b88612d fix: add service status indicator for ongoing issues (#95) 2025-12-05 16:46:17 +09:00
Dayuan Jiang
c805277a76 fix: enable UI retry when Bedrock returns early 503 error (#94)
- Add error prop to ChatInput to detect error state
- Update isDisabled logic to allow retry when there's an error
- Pass combined error (SDK error + streamingError) to ChatInput

When Bedrock returns 503 ServiceUnavailableException before streaming
starts, AI SDK's onError fires but status may not transition to "ready".
This fix ensures the input is re-enabled when an error occurs, allowing
users to retry their request.
2025-12-05 16:22:38 +09:00
Dayuan Jiang
95160f5a21 fix: handle Bedrock 503 streaming errors with timeout detection (#92)
- Add 15s streaming timeout to detect mid-stream stalls (e.g., Bedrock 503)
- Add stop() call to allow user retry after timeout
- Add streamingError state for timeout-detected errors
- Improve server-side error logging for empty usage detection
- Add user-friendly error messages for ServiceUnavailable and Throttling errors
2025-12-05 14:23:47 +09:00
broBinChen
563b18e8ff refactor: replace deprecated addToolResult with addToolOutput (#85)
Replaced the deprecated addToolResult API with the new addToolOutput API from the ai package to ensure compatibility with future versions.
2025-12-05 14:02:45 +09:00
dayuan.jiang
ff6f130f8a refactor: remove Langfuse observability integration
- Delete lib/langfuse.ts, instrumentation.ts
- Remove API routes: log-save, log-feedback
- Remove feedback buttons (thumbs up/down) from chat
- Remove sessionId tracking throughout codebase
- Remove @langfuse/*, @opentelemetry dependencies
- Clean up env.example
2025-12-05 01:30:02 +09:00
dayuan.jiang
95e8a9c0c0 fix: update chartXMLRef directly before sendMessage to avoid race condition
The React state update (setChartXML) is async, so chartXMLRef wasn't updated
when the edit_diagram tool callback checked it. We now update the ref directly
in onFormSubmit, handleRegenerate, and handleEditMessage before sending.
2025-12-05 00:54:35 +09:00
dayuan.jiang
d9568562f0 fix: use ref for chartXML to avoid stale closure in onToolCall
The onToolCall callback was capturing a stale chartXML value due to a
JavaScript closure. Using a ref ensures we always get the latest value.
2025-12-05 00:47:27 +09:00
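
A minimal sketch of the ref pattern from this fix; the hook shape is illustrative, since the real code lives inside chat-panel.tsx.

```ts
import { useEffect, useRef, useState } from "react"

function useLatestChartXml() {
    const [chartXML, setChartXML] = useState("")
    const chartXMLRef = useRef(chartXML)

    // Keep the ref in sync so long-lived callbacks can read the latest value.
    useEffect(() => {
        chartXMLRef.current = chartXML
    }, [chartXML])

    // A callback created once (like onToolCall) captures chartXML from its
    // closure and would go stale; reading chartXMLRef.current avoids that.
    const onToolCall = () => chartXMLRef.current

    return { chartXML, setChartXML, chartXMLRef, onToolCall }
}
```
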
dayuan.jiang
7b8bd8c621 fix: use cached chartXML for edit_diagram to avoid Vercel timeout
DrawIO iframe export was unreliable on Vercel due to network latency,
causing the edit_diagram tool to hang. It now uses chartXML from the context
directly, falling back to export only when no cached XML exists.
2025-12-05 00:43:21 +09:00
dayuan.jiang
d8f2c85dab feat: link user feedback and diagram saves to chat traces in Langfuse
- Update log-feedback API to find existing chat trace by sessionId and attach score to it
- Update log-save API to create span on existing chat trace instead of standalone trace
- Add thumbs up/down feedback buttons on assistant messages
- Add message regeneration and edit functionality
- Add save dialog with format selection (drawio, png, svg)
- Pass sessionId through components for Langfuse linking
2025-12-04 22:56:59 +09:00
Dayuan Jiang
3534cb13f7 refactor: extract system prompts and add extended prompt for Opus/Haiku 4.5 (#71)
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
2025-12-04 13:26:06 +09:00
Dayuan Jiang
fa1b02ad78 feat: integrate Langfuse for LLM observability (#66)
* feat: integrate Langfuse for LLM observability

- Add instrumentation.ts with Langfuse OpenTelemetry exporter
- Enable experimental telemetry on streamText calls
- Add instrumentationHook to Next.js config
- Install required dependencies (@vercel/otel, langfuse-vercel, etc.)

* feat: add optional Langfuse observability integration

- Add session tracking with unique sessionId per conversation
- Add user tracking via IP address (x-forwarded-for header)
- Make telemetry conditional - only enabled if LANGFUSE_PUBLIC_KEY is set
- Add environment variable validation in instrumentation.ts
- Add sessionId validation (type check + 200 char limit)
- Update env.example with Langfuse configuration docs
- Remove unused langfuse-vercel and @vercel/otel packages

* fix: remove deprecated instrumentationHook (enabled by default in Next.js 15)
2025-12-04 00:23:09 +09:00
Dayuan Jiang
39322c2793 fix: prevent duplicate history entries when edit_diagram tool is called (#64)
- Add handleExportWithoutHistory function for fetching current diagram state without saving to history
- Update onFetchChart to accept saveToHistory parameter (defaults to true)
- edit_diagram tool now fetches with saveToHistory=false since it only needs the current state
- Only the initial form submission saves to history as intended
2025-12-03 21:58:48 +09:00
Dayuan Jiang
110cccb09c feat: refresh UI with new typography and edit diff display (#63)
- Switch from Geist to Plus Jakarta Sans (body) and JetBrains Mono (code)
- Add visual diff display for edit_diagram tool showing search/replace pairs
- Update color palette to clean modern OKLCH-based scheme
- Improve chat message display with better styling and animations
- Add syntax-highlighted code blocks for XML/JSON output
- Improve scrollbar and shadow utilities
2025-12-03 21:49:34 +09:00