Commit Graph

432 Commits

Dayuan Jiang
46567cb0b8 feat: verify access code with server before saving (#128) 2025-12-07 00:21:59 +09:00
Dayuan Jiang
9f77199272 feat: add configurable close protection setting (#123)
- Add Close Protection toggle to Settings dialog
- Save setting to localStorage (default: enabled)
- Make beforeunload confirmation conditional
- Settings button now always visible in header
- Add shadcn Switch and Label components
2025-12-06 21:42:28 +09:00
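A minimal sketch of the conditional close protection described above; the hook name, storage key, and default value are assumptions, not the project's actual identifiers.

```ts
import { useEffect, useState } from "react"

const STORAGE_KEY = "close-protection" // assumed localStorage key

export function useCloseProtection() {
    const [enabled, setEnabled] = useState(true) // default: enabled

    // Load the persisted setting once on mount
    useEffect(() => {
        const stored = localStorage.getItem(STORAGE_KEY)
        if (stored !== null) setEnabled(stored === "true")
    }, [])

    // Only register the beforeunload confirmation while protection is enabled
    useEffect(() => {
        if (!enabled) return
        const handler = (e: BeforeUnloadEvent) => {
            e.preventDefault()
            e.returnValue = "" // legacy browsers expect a string value
        }
        window.addEventListener("beforeunload", handler)
        return () => window.removeEventListener("beforeunload", handler)
    }, [enabled])

    // Called from the Settings dialog toggle
    const toggle = (value: boolean) => {
        setEnabled(value)
        localStorage.setItem(STORAGE_KEY, String(value))
    }

    return { enabled, toggle }
}
```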
dayuan.jiang
77f2569a3b chore: bump version to 0.3.0 (tag: v0.3.0) 2025-12-06 19:26:26 +09:00
Dayuan Jiang
cbb92bd636 fix: set maxDuration to 60 for Vercel hobby plan (#122) 2025-12-06 18:09:30 +09:00
Dayuan Jiang
8d898d8adc fix: revert maxDuration to static value (Next.js requirement) (#121) 2025-12-06 18:04:23 +09:00
Dayuan Jiang
1e0b1ed970 feat: make maxDuration configurable via MAX_DURATION env (#120) 2025-12-06 17:47:50 +09:00
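The three maxDuration commits above converge on a static export: Next.js requires route segment config to be statically analyzable, so the env-driven version had to be reverted. A minimal sketch of the final state:

```ts
// app/api/chat/route.ts (sketch)
// Route segment config must be a literal, not process.env.MAX_DURATION.
// 60 seconds is the function-duration ceiling on the Vercel hobby plan.
export const maxDuration = 60
```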
Dayuan Jiang
1d03d10ba8 docs: add CONTRIBUTING.md (#119) 2025-12-06 17:39:47 +09:00
Dayuan Jiang
e893bd60f9 fix: resolve biome lint errors and memory leak in file preview (#118)
- Disable noisy biome rules (noExplicitAny, useExhaustiveDependencies, etc.)
- Fix memory leak in file-preview-list.tsx with useRef pattern
- Separate unmount cleanup into dedicated useEffect
- Add ToolPartLike interface for type safety in chat-message-display
- Add accessibility attributes (role, tabIndex, onKeyDown)
- Replace autoFocus with useEffect focus pattern
- Minor syntax improvements (optional chaining, key fixes)
2025-12-06 16:18:26 +09:00
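A sketch of the useRef cleanup pattern mentioned above, assuming the leaked resources in file-preview-list.tsx are object URLs created for image previews (the actual resource may differ); names are illustrative.

```ts
import { useEffect, useRef } from "react"

export function usePreviewUrls() {
    // Track every URL we create in a ref so the unmount cleanup can see all
    // of them without re-running an effect on each render
    const urlsRef = useRef<string[]>([])

    const createPreviewUrl = (file: File) => {
        const url = URL.createObjectURL(file)
        urlsRef.current.push(url)
        return url
    }

    // Dedicated unmount-only effect: revoke everything exactly once
    useEffect(() => {
        return () => {
            for (const url of urlsRef.current) URL.revokeObjectURL(url)
            urlsRef.current = []
        }
    }, [])

    return createPreviewUrl
}
```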
Dayuan Jiang
9aaf9bf31f refactor: deduplicate system prompts with two-phase composition (#117) 2025-12-06 12:58:53 +09:00
Dayuan Jiang
150eb1ff63 chore: add Biome for formatting and linting (#116)
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
2025-12-06 12:46:40 +09:00
Dayuan Jiang
215a101f54 fix: revert edit_diagram tool description to original (#115) 2025-12-06 12:41:01 +09:00
Dayuan Jiang
e00938d9d3 feat: enhance system prompt with app context and dynamic model name (#114)
- Add App Context section describing the left/right panel layout
- Add App Features section with icon locations (history, theme, upload, export, clear)
- Dynamically inject model name into system prompt via {{MODEL_NAME}} placeholder
- Expand edit_diagram tool description with usage guidelines
2025-12-06 12:37:37 +09:00
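A sketch of the {{MODEL_NAME}} injection; the template text and function name are illustrative, only the placeholder itself comes from the commit above.

```ts
// Assumed template shape; the real prompt lives in lib/system-prompts.ts
const SYSTEM_PROMPT_TEMPLATE = `You are a draw.io diagram assistant running on {{MODEL_NAME}}.
Follow the app context and tool guidelines below.`

export function buildSystemPrompt(modelName: string): string {
    // Replace every occurrence of the placeholder with the active model name
    return SYSTEM_PROMPT_TEMPLATE.replaceAll("{{MODEL_NAME}}", modelName)
}

// Usage: buildSystemPrompt("claude-sonnet-4") -> prompt with the model injected
```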
Dayuan Jiang
dd27d034e2 fix: force re-render when switching between mobile/desktop layout (#112) 2025-12-05 23:45:57 +09:00
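One common way to force a re-render on a breakpoint change is to key the layout on the mobile flag so React remounts the tree; whether the project uses exactly this mechanism is an assumption.

```tsx
import type { ReactNode } from "react"

// Sketch only: changing the key unmounts the old layout and mounts a fresh
// one, so desktop-only state does not leak into the mobile layout
export function ResponsiveLayout({ isMobile, children }: { isMobile: boolean, children: ReactNode }) {
    return (
        <div key={isMobile ? "mobile" : "desktop"} className={isMobile ? "flex flex-col" : "flex flex-row"}>
            {children}
        </div>
    )
}
```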
Dayuan Jiang
9e781005af fix: make button hover state darker instead of lighter (#111) 2025-12-05 23:38:24 +09:00
Dayuan Jiang
fe1aa2747e fix: add viewport meta tag for mobile layout (#110) 2025-12-05 23:34:18 +09:00
Dayuan Jiang
7205896c8c feat: add mobile layout with chat panel at bottom (#109) 2025-12-05 23:25:59 +09:00
Dayuan Jiang
4e32a094b1 fix: restore status notice indicator removed in PR #77 (#107) 2025-12-05 23:22:29 +09:00
Dayuan Jiang
96a1111654 fix: ensure markdown text in user messages is visible (#108)
The prose plugin was overriding text colors for markdown elements
(bold, headings, etc.) in user message bubbles, causing text to
blend with the dark primary background.

Added conditional styling that forces all child elements in user
messages to use text-primary-foreground color with !important to
override prose defaults.
2025-12-05 23:16:59 +09:00
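A sketch of the conditional prose override described above; the exact class string and the cn() helper path are assumptions based on typical shadcn/Tailwind setups.

```tsx
import ReactMarkdown from "react-markdown"
import { cn } from "@/lib/utils" // assumed shadcn class-merge helper

export function MessageBody({ text, isUser }: { text: string, isUser: boolean }) {
    return (
        <div
            className={cn(
                "prose dark:prose-invert",
                // In user bubbles, force every descendant (bold, headings, ...)
                // onto the bubble's foreground color with !important so the
                // typography plugin's defaults cannot make it blend in
                isUser && "[&_*]:!text-primary-foreground"
            )}
        >
            <ReactMarkdown>{text}</ReactMarkdown>
        </div>
    )
}
```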
Dayuan Jiang
3f35c52527 feat: add draw.io theme toggle between minimal and sketch (#106)
- Add toggle button in chat input area to switch between min and sketch themes
- Show warning dialog before switching (clears messages and diagram)
- Persist theme selection in localStorage
- Default theme is minimal (hides shapes sidebar)
2025-12-05 23:10:48 +09:00
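A sketch of the theme toggle state, assuming the selection is persisted under a localStorage key and passed to the draw.io embed as its UI mode ("min" or "sketch"); names and the key are illustrative.

```ts
import { useState } from "react"

type DrawioTheme = "min" | "sketch"
const THEME_KEY = "drawio-theme" // assumed localStorage key

export function useDrawioTheme(onConfirmSwitch: () => void) {
    const [theme, setTheme] = useState<DrawioTheme>(
        () => (localStorage.getItem(THEME_KEY) as DrawioTheme | null) ?? "min"
    )

    const toggleTheme = () => {
        const next: DrawioTheme = theme === "min" ? "sketch" : "min"
        // Switching reloads the embed, so the caller clears messages and the
        // diagram once the user confirms the warning dialog
        onConfirmSwitch()
        setTheme(next)
        localStorage.setItem(THEME_KEY, next)
    }

    return { theme, toggleTheme }
}
```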
Dayuan Jiang
0af5229477 feat: add markdown rendering and resizable chat panel (#104)
* feat: add markdown rendering for chat messages

- Add react-markdown and @tailwindcss/typography for markdown support
- Use prose styling for assistant message formatting
- Fix Radix ScrollArea viewport horizontal overflow issue
- Add CSS fix for viewport width constraint

* feat: add resizable chat panel

- Replace fixed width layout with react-resizable-panels
- Chat panel can be resized by dragging the handle
- Panel is collapsible with min 15% and max 50% width
- Ctrl+B keyboard shortcut still works for toggle
2025-12-05 22:42:39 +09:00
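A sketch of the resizable layout with react-resizable-panels; the 15% min and 50% max come from the commit above, the default size and class names are assumptions.

```tsx
import type { ReactNode } from "react"
import { Panel, PanelGroup, PanelResizeHandle } from "react-resizable-panels"

export function AppLayout({ chat, diagram }: { chat: ReactNode, diagram: ReactNode }) {
    return (
        <PanelGroup direction="horizontal">
            <Panel defaultSize={30} minSize={15} maxSize={50} collapsible>
                {chat}
            </Panel>
            {/* Dragging this handle resizes the chat panel */}
            <PanelResizeHandle className="w-1 bg-border" />
            <Panel>{diagram}</Panel>
        </PanelGroup>
    )
}
```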
Twelveeee
3fb349fb3e clear button can't clear error msg & feat: add settings dialog and access code (#77)
* fix: clear button can't clear error msg

* new: add settings dialog and access code

* fix: address review feedback - dark mode, types, formatting

* feat: only show Settings button when access code is required

* refactor: rename ACCESS_CODES to ACCESS_CODE_LIST

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-05 22:09:34 +09:00
Dayuan Jiang
ed29e32ba3 feat: restore Langfuse observability integration (#103)
- Add lib/langfuse.ts with client, trace input/output, telemetry config
- Add instrumentation.ts for OpenTelemetry setup with Langfuse span processor
- Add /api/log-save endpoint for logging diagram saves
- Add /api/log-feedback endpoint for thumbs up/down feedback
- Update chat route with sessionId tracking and telemetry
- Add feedback buttons (thumbs up/down) to chat messages
- Add sessionId tracking throughout the app
- Update env.example with Langfuse configuration
- Add @langfuse/client, @langfuse/otel, @langfuse/tracing, @opentelemetry/sdk-trace-node
2025-12-05 21:15:02 +09:00
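A sketch of the instrumentation.ts setup, grounded only in the dependencies listed above (@langfuse/otel and @opentelemetry/sdk-trace-node); the exact exports and constructor options should be checked against the Langfuse docs for the installed versions.

```ts
// instrumentation.ts (sketch)
import { LangfuseSpanProcessor } from "@langfuse/otel"
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"

export function register() {
    // Skip setup entirely when Langfuse isn't configured
    if (!process.env.LANGFUSE_PUBLIC_KEY) return

    const provider = new NodeTracerProvider({
        // Ship finished spans (including AI SDK telemetry) to Langfuse
        spanProcessors: [new LangfuseSpanProcessor()],
    })
    provider.register()
}
```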
Dayuan Jiang
4cd78dc561 chore: remove complex 503 error handling code (#102)
- Remove 15s streaming timeout detection (too slow, added complexity)
- Remove status indicator (issue resolved by switching model)
- Remove streamingError state and related refs
- Simplify onFinish callback (remove 503 detection logging)
- Remove errorHandler function (use default AI SDK errors)

The real fix was switching from global.* to us.* Bedrock model.
This removes ~134 lines of unnecessary complexity.
2025-12-05 20:18:19 +09:00
Dayuan Jiang
e0c5d966e3 feat: add image upload validation with 2MB limit and max 5 files (#101)
- Add 2MB file size limit with client and server-side validation
- Add max 5 files limit per upload
- Add sonner toast library for better error notifications
- Create ErrorToast component with keyboard accessibility
- Batch multiple validation errors into single toast
- Validate file size in all upload methods (input, paste, drag-drop)
- Add server-side validation in /api/chat endpoint
2025-12-05 19:30:50 +09:00
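A sketch of the client-side checks; the 2MB and 5-file limits come from the commit above, the function name and error wording are illustrative.

```ts
import { toast } from "sonner"

const MAX_FILE_SIZE = 2 * 1024 * 1024 // 2 MB
const MAX_FILES = 5

export function validateFiles(files: File[]): File[] {
    const errors: string[] = []

    if (files.length > MAX_FILES) {
        errors.push(`You can upload at most ${MAX_FILES} files at once.`)
    }

    const accepted = files.slice(0, MAX_FILES).filter((file) => {
        if (file.size > MAX_FILE_SIZE) {
            errors.push(`${file.name} exceeds the 2MB limit.`)
            return false
        }
        return true
    })

    // Batch all validation problems into a single toast instead of one per file
    if (errors.length > 0) {
        toast.error(errors.join("\n"))
    }

    return accepted
}
```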
Dayuan Jiang
33471d5b3a docs: add AI provider configuration guide (#100)
- Add docs/ai-providers.md with detailed setup instructions for all providers
- Update README.md, README_CN.md, README_JA.md with provider guide links
- Add model capability requirements note
- Simplify provider list in READMEs

Closes #79
2025-12-05 18:53:34 +09:00
Dayuan Jiang
3ef9908df7 feat: add confirmation dialog to prevent accidental back navigation (#99)
Addresses conflict between right-click drag and browser back gesture in
Chromium-based browsers. Shows browser confirmation dialog when user
tries to navigate away, preventing accidental page exits.

Closes #80
2025-12-05 18:42:36 +09:00
Dayuan Jiang
57bfc9cef7 fix: update status indicator to show outage resolved (#98) 2025-12-05 18:07:25 +09:00
Dayuan Jiang
0543f71c43 fix: use console.log instead of console.error for XML validation during streaming (#96) 2025-12-05 16:59:14 +09:00
Dayuan Jiang
970b88612d fix: add service status indicator for ongoing issues (#95) 2025-12-05 16:46:17 +09:00
Dayuan Jiang
c805277a76 fix: enable UI retry when Bedrock returns early 503 error (#94)
- Add error prop to ChatInput to detect error state
- Update isDisabled logic to allow retry when there's an error
- Pass combined error (SDK error + streamingError) to ChatInput

When Bedrock returns 503 ServiceUnavailableException before streaming
starts, AI SDK's onError fires but status may not transition to "ready".
This fix ensures the input is re-enabled when an error occurs, allowing
users to retry their request.
2025-12-05 16:22:38 +09:00
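A sketch of the disabled-state logic described above; the status values follow the AI SDK useChat states, and the function shape is an assumption.

```ts
type ChatStatus = "ready" | "submitted" | "streaming" | "error"

export function isInputDisabled(status: ChatStatus, error?: Error): boolean {
    // Normally the input is locked while a request is in flight, but when an
    // error arrives (e.g. an early Bedrock 503) the status may never return
    // to "ready", so the presence of an error re-enables the input for retry
    const busy = status === "submitted" || status === "streaming"
    return busy && !error
}
```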
Dayuan Jiang
95160f5a21 fix: handle Bedrock 503 streaming errors with timeout detection (#92)
- Add 15s streaming timeout to detect mid-stream stalls (e.g., Bedrock 503)
- Add stop() call to allow user retry after timeout
- Add streamingError state for timeout-detected errors
- Improve server-side error logging for empty usage detection
- Add user-friendly error messages for ServiceUnavailable and Throttling errors
2025-12-05 14:23:47 +09:00
broBinChen
b206e16c02 fix: clear files when clicking text-only examples (#82)
Fixed an issue where files from previous examples would persist when clicking on "Animated Diagram" or "Creative Drawing" examples that don't require image uploads.
2025-12-05 14:07:14 +09:00
broBinChen
563b18e8ff refactor: replace deprecated addToolResult with addToolOutput (#85)
Replaced the deprecated addToolResult API with the new addToolOutput API from the ai package to ensure compatibility with future versions.
2025-12-05 14:02:45 +09:00
dayuan.jiang
2366255e8f fix: use credential provider chain for bedrock IAM role support 2025-12-05 09:19:26 +09:00
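A sketch of the IAM-role-friendly Bedrock setup for the two commits above; this assumes @ai-sdk/amazon-bedrock exposes a credentialProvider option (verify against the AI SDK docs for the installed version).

```ts
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock"
import { fromNodeProviderChain } from "@aws-sdk/credential-providers"

export const bedrock = createAmazonBedrock({
    region: process.env.AWS_REGION ?? "us-east-1",
    // The default provider chain resolves env vars, shared config files, and
    // instance/task IAM roles, so static access keys become optional
    credentialProvider: fromNodeProviderChain(),
})
```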
dayuan.jiang
255308f829 fix: make bedrock credentials optional for IAM role support 2025-12-05 09:11:10 +09:00
dayuan.jiang
a9493c8877 fix: write env vars to .env.production for Amplify SSR runtime 2025-12-05 09:04:54 +09:00
dayuan.jiang
a0c3db100a fix: add favicon.ico to public folder for header logo 2025-12-05 08:56:34 +09:00
dayuan.jiang
ff6f130f8a refactor: remove Langfuse observability integration
- Delete lib/langfuse.ts, instrumentation.ts
- Remove API routes: log-save, log-feedback
- Remove feedback buttons (thumbs up/down) from chat
- Remove sessionId tracking throughout codebase
- Remove @langfuse/*, @opentelemetry dependencies
- Clean up env.example
2025-12-05 01:30:02 +09:00
dayuan.jiang
562751c913 fix: disable recordInputs to prevent Langfuse media upload timeout
When images are included in chat messages, the AI SDK telemetry with
recordInputs: true sends base64 image data to Langfuse. Langfuse then
attempts to upload these images to media storage, causing 1m31s timeouts.

Setting recordInputs: false prevents this while still capturing user
text input via setTraceInput().
2025-12-05 01:14:01 +09:00
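A sketch of the telemetry flags on streamText; only the recordInputs/recordOutputs settings are taken from the commit above, the wrapper function is illustrative.

```ts
import { streamText } from "ai"

type StreamArgs = Pick<Parameters<typeof streamText>[0], "model" | "messages">

export function streamWithSafeTelemetry(args: StreamArgs) {
    return streamText({
        ...args,
        experimental_telemetry: {
            isEnabled: true,
            // Keep base64 image inputs out of the exported spans so Langfuse
            // does not attempt (and time out on) media uploads; user text is
            // still captured separately as the trace input
            recordInputs: false,
            recordOutputs: true,
        },
    })
}
```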
dayuan.jiang
95e8a9c0c0 fix: update chartXMLRef directly before sendMessage to avoid race condition
The React state update (setChartXML) is async, so chartXMLRef wasn't updated
when edit_diagram tool callback checked it. Now we update the ref directly
in onFormSubmit, handleRegenerate, and handleEditMessage before sending.
2025-12-05 00:54:35 +09:00
dayuan.jiang
d9568562f0 fix: use ref for chartXML to avoid stale closure in onToolCall
The onToolCall callback was capturing stale chartXML value due to
JavaScript closure. Using a ref ensures we always get the latest value.
2025-12-05 00:47:27 +09:00
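The two fixes above revolve around the same pattern: keep a ref in sync with the diagram XML so callbacks registered once (like onToolCall) never read a stale closed-over value, and write the ref synchronously before sending. A sketch, with illustrative names:

```ts
import { useRef, useState } from "react"

export function useChartXML() {
    const [chartXML, setChartXML] = useState("")
    // Callbacks read the ref, which always holds the latest value
    const chartXMLRef = useRef(chartXML)

    const updateChartXML = (xml: string) => {
        // setState alone is async; updating the ref directly means a tool
        // call firing before the re-render still sees the new XML
        chartXMLRef.current = xml
        setChartXML(xml)
    }

    return { chartXML, chartXMLRef, updateChartXML }
}

// In onFormSubmit / handleRegenerate / handleEditMessage (sketch):
//   chartXMLRef.current = latestXML
//   sendMessage({ text })
```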
dayuan.jiang
7b8bd8c621 fix: use cached chartXML for edit_diagram to avoid Vercel timeout
DrawIO iframe export was unreliable on Vercel due to network latency,
causing edit_diagram tool to hang. Now uses chartXML from context directly,
falling back to export only when no cached XML exists.
2025-12-05 00:43:21 +09:00
dayuan.jiang
46cbc3354c fix: add manual token usage reporting to Langfuse for Bedrock streaming
Bedrock streaming responses don't auto-report token usage to OpenTelemetry.
This fix manually sets span attributes (ai.usage.promptTokens, gen_ai.usage.input_tokens)
from the AI SDK onFinish callback to ensure Langfuse captures token counts.
2025-12-05 00:26:02 +09:00
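A sketch of the manual usage reporting from onFinish; the two attribute names in the commit above are kept, their output-side counterparts and the helper shape are assumptions.

```ts
import { trace } from "@opentelemetry/api"

export function reportUsage(usage: { inputTokens?: number, outputTokens?: number }) {
    const span = trace.getActiveSpan()
    if (!span) return
    // Bedrock streaming doesn't populate these automatically, so set them by
    // hand so Langfuse can attribute token counts to the generation
    span.setAttributes({
        "ai.usage.promptTokens": usage.inputTokens ?? 0,
        "ai.usage.completionTokens": usage.outputTokens ?? 0,
        "gen_ai.usage.input_tokens": usage.inputTokens ?? 0,
        "gen_ai.usage.output_tokens": usage.outputTokens ?? 0,
    })
}
```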
dayuan.jiang
46d2d4e078 refactor: add input validation and singleton pattern for Langfuse API routes
- Add Zod schema validation for log-feedback and log-save endpoints
- Create singleton LangfuseClient to avoid per-request instantiation
- Simplify log-save to only flag trace (no XML content sent)
- Use generic error messages to prevent info leakage
2025-12-04 23:44:00 +09:00
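A sketch of the validation and singleton pieces; the schema fields, error text, and LangfuseClient construction details are assumptions.

```ts
import { z } from "zod"
import { LangfuseClient } from "@langfuse/client"

// Module-level singleton: one client per server process, not per request
let client: LangfuseClient | null = null
export function getLangfuse(): LangfuseClient {
    if (!client) client = new LangfuseClient() // assumed to read LANGFUSE_* env vars
    return client
}

// Reject malformed feedback payloads before touching Langfuse
export const feedbackSchema = z.object({
    sessionId: z.string().max(200),
    value: z.union([z.literal(1), z.literal(-1)]), // thumbs up / thumbs down
})

export function parseFeedback(body: unknown) {
    const result = feedbackSchema.safeParse(body)
    // Generic error message to avoid leaking validation internals
    if (!result.success) return { error: "Invalid request" as const }
    return { data: result.data }
}
```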
dayuan.jiang
d8f2c85dab feat: link user feedback and diagram saves to chat traces in Langfuse
- Update log-feedback API to find existing chat trace by sessionId and attach score to it
- Update log-save API to create span on existing chat trace instead of standalone trace
- Add thumbs up/down feedback buttons on assistant messages
- Add message regeneration and edit functionality
- Add save dialog with format selection (drawio, png, svg)
- Pass sessionId through components for Langfuse linking
2025-12-04 22:56:59 +09:00
Dayuan Jiang
5f4d31e708 fix: auto-detect AI provider from configured API keys (#74)
- Remove default bedrock provider requirement
- Auto-detect provider when only one API key is configured
- Show helpful error when no keys or multiple keys without AI_PROVIDER
- Fixes #73
2025-12-04 14:13:10 +09:00
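A sketch of the auto-detection logic; the env-var-to-provider mapping and error wording are assumptions based on the providers the project documents.

```ts
const PROVIDER_KEYS: Record<string, string | undefined> = {
    bedrock: process.env.AWS_ACCESS_KEY_ID,
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    google: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
}

export function detectProvider(): string {
    // An explicit setting always wins
    if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER

    const configured = Object.entries(PROVIDER_KEYS)
        .filter(([, key]) => Boolean(key))
        .map(([name]) => name)

    if (configured.length === 1) return configured[0]
    if (configured.length === 0) {
        throw new Error("No AI provider API key configured. Set one key or AI_PROVIDER.")
    }
    throw new Error(
        `Multiple API keys found (${configured.join(", ")}). Set AI_PROVIDER to choose one.`
    )
}
```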
Dayuan Jiang
489b377063 chore: upgrade Next.js from 15.2.3 to 16.0.7 (#72)
- Fixes critical CVE-2025-66478 (CVSS 10.0) - RSC protocol vulnerability
- Includes Turbopack stability improvements
- Updated tsconfig.json with Next.js 16 recommended settings
2025-12-04 13:48:30 +09:00
Dayuan Jiang
3534cb13f7 refactor: extract system prompts and add extended prompt for Opus/Haiku 4.5 (#71)
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
2025-12-04 13:26:06 +09:00
Dayuan Jiang
9d9613a8d1 feat: add trace-level input/output to Langfuse observability (#69)
* feat: add trace-level input/output to Langfuse observability

- Add @langfuse/client and @langfuse/tracing dependencies
- Wrap POST handler with observe() for proper tracing
- Use updateActiveTrace() to set trace input, output, sessionId, userId
- Filter Next.js HTTP spans in shouldExportSpan so AI SDK spans become root traces
- Enable recordInputs/recordOutputs in experimental_telemetry

* refactor: extract Langfuse logic to separate lib/langfuse.ts module
2025-12-04 11:24:26 +09:00
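A sketch of the observe() wrapper and updateActiveTrace() usage described above; the exact signatures in @langfuse/tracing should be verified against its docs, and the handler body is placeholder.

```ts
import { observe, updateActiveTrace } from "@langfuse/tracing"

async function handleChat(req: Request): Promise<Response> {
    const { messages, sessionId } = await req.json()

    // Attach conversation-level metadata to the active trace so feedback and
    // saves can later be linked by sessionId
    updateActiveTrace({
        sessionId,
        input: messages.at(-1),
    })

    // ... stream the model response here (omitted) ...
    return new Response("ok")
}

// observe() wraps the handler so each request runs inside a Langfuse trace
export const POST = observe(handleChat)
```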
Dayuan Jiang
bed04c82f8 chore: add Apache 2.0 license and update gitignore (#68)
* feat: integrate Langfuse for LLM observability

- Add instrumentation.ts with Langfuse OpenTelemetry exporter
- Enable experimental telemetry on streamText calls
- Add instrumentationHook to Next.js config
- Install required dependencies (@vercel/otel, langfuse-vercel, etc.)

* feat: add optional Langfuse observability integration

- Add session tracking with unique sessionId per conversation
- Add user tracking via IP address (x-forwarded-for header)
- Make telemetry conditional - only enabled if LANGFUSE_PUBLIC_KEY is set
- Add environment variable validation in instrumentation.ts
- Add sessionId validation (type check + 200 char limit)
- Update env.example with Langfuse configuration docs
- Remove unused langfuse-vercel and @vercel/otel packages

* fix: remove deprecated instrumentationHook (enabled by default in Next.js 15)

* chore: add Apache 2.0 license and update gitignore
2025-12-04 00:33:32 +09:00
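A sketch of the two guards described in the commit above (telemetry only when LANGFUSE_PUBLIC_KEY is set, and sessionId validation with a type check and 200-character limit); names are illustrative.

```ts
// Telemetry is a no-op unless Langfuse credentials are configured
export const telemetryEnabled = Boolean(process.env.LANGFUSE_PUBLIC_KEY)

export function sanitizeSessionId(value: unknown): string | undefined {
    // Accept only plain strings of bounded length; anything else is dropped
    if (typeof value !== "string" || value.length > 200) return undefined
    return value
}
```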