Compare commits

...

39 Commits

Author SHA1 Message Date
Dayuan Jiang
c7e88b0711 Remove Electron settings panel (breaking change)
- Remove electron/settings/ folder
- Remove config-manager.ts and settings-window.ts
- Remove Configuration menu from app menu
- Remove config-presets IPC handlers
- Update README.md
- Clean up electron.d.ts

Note: This is a breaking change - users will lose existing presets
2025-12-25 13:58:21 +09:00
Dayuan Jiang
51858dbf5d Add deprecation notice to Electron settings panel (#403)
- Add warning banner to settings window HTML
- Add CSS styling for deprecation notice (light/dark mode)
- Direct users to use AI Model Configuration button in chat panel
2025-12-25 13:56:07 +09:00
Dayuan Jiang
3047d19238 fix: rename edit_diagram type field to operation for better model compatibility (#402)
Fixes #374 - Models were confused by the `type` field name and sent
`operation` instead. This change:

- Renames DiagramOperation.type to DiagramOperation.operation across
  all files (MCP server, web app, hooks, components, system prompts)
- Adds JSON examples in tool descriptions to show correct format
- Updates all test data to use the new field name

Affected files:
- lib/utils.ts
- app/api/chat/route.ts
- hooks/use-diagram-tool-handlers.ts
- components/chat-message-display.tsx
- lib/system-prompts.ts
- packages/mcp-server/src/diagram-operations.ts
- packages/mcp-server/src/index.ts
- scripts/test-diagram-operations.mjs

MCP server version bumped to 0.1.6
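
A minimal illustration of the renamed field (a sketch that mirrors the DiagramOperation interface and the delete example shown in the diffs further down, not the full tool schema):

```ts
// Sketch: diagram edits now carry "operation" instead of "type".
interface DiagramOperation {
    operation: "update" | "add" | "delete"
    cell_id: string
    new_xml?: string // required for update/add, omitted for delete
}

// Example payload in the format the tool description now documents.
const example: { operations: DiagramOperation[] } = {
    operations: [{ operation: "delete", cell_id: "rect-1" }],
}
```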
2025-12-25 13:19:04 +09:00
Dayuan Jiang
ed069afdea fix: use full IP for userId to prevent quota collision (#400)
* fix: use full IP for userId to prevent quota collision

- Remove .slice(0, 8) from base64 encoded IP
- Each IP now has unique userId (no /16 collision)
- Affects: quota tracking, Langfuse tracing

* refactor: extract getUserIdFromRequest to shared utility

- Create lib/user-id.ts with shared function
- Fix misleading 'privacy' comment (base64 is not privacy)
- Remove duplicate code from chat and log-feedback routes
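
A minimal sketch of the shared helper described above, assuming a Node runtime (the real lib/user-id.ts may differ in detail):

```ts
import { Buffer } from "node:buffer"

// Derive a stable per-IP userId for quota tracking and Langfuse tracing.
export function getUserIdFromRequest(req: Request): string {
    const forwardedFor = req.headers.get("x-forwarded-for")
    const ip = forwardedFor?.split(",")[0]?.trim()
    if (!ip) return "anonymous"
    // Base64 is an encoding, not a privacy measure; it only normalizes the ID.
    // No .slice(0, 8): keeping the full value avoids /16-style collisions.
    return Buffer.from(ip).toString("base64")
}
```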
2025-12-25 12:20:46 +09:00
Biki Kalita
d2e5afb298 Hide scrollbar in model selector dropdown while maintaining scroll functionality (#396)
* fix: hide vertical scrollbar in model selector while maintaining scroll functionality

* feat: add gradient shadow indicator for scrollable content

---------

Co-authored-by: Dayuan Jiang <jdy.toh@gmail.com>
2025-12-25 08:58:04 +09:00
Biki Kalita
d3fb2314ee fix: add scrollable model list with visible scrollbar in AI Model Configuration dialog (#395) 2025-12-24 19:06:11 +09:00
Dayuan Jiang
447bb30745 refactor: extract diagram tool handlers to dedicated hook (#389)
- Create useDiagramToolHandlers hook for display_diagram, edit_diagram, append_diagram
- Remove ~300 lines from chat-panel.tsx
- Remove unused stopRef
- Gate debug console.log statements with DEBUG constant
2025-12-24 12:28:59 +09:00
Dayuan Jiang
63398d9f34 fix: filter Langfuse traces to only export chat and AI SDK spans (#392)
Switch from blocklist to whitelist approach - only export spans named
'chat' or starting with 'ai.' to filter out Next.js infrastructure noise
(HEAD, fetch, POST requests).
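
The whitelist boils down to a predicate like the following sketch (the actual exporter wiring lives in the Langfuse/OTel setup and may differ):

```ts
// Only export the app's own spans: the top-level "chat" span and the
// AI SDK telemetry spans ("ai.streamText", "ai.toolCall", ...).
function shouldExportSpan(spanName: string): boolean {
    return spanName === "chat" || spanName.startsWith("ai.")
}
```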
2025-12-24 10:47:34 +09:00
Dayuan Jiang
82f4deb23a fix: quota daily reset bug and add timezone support (#390)
- Fixed bug where daily quota counts weren't resetting on new day
  (if_not_exists only works for missing attributes, not day changes)
- Changed to two-phase approach: reset if new day, then increment
- Added QUOTA_TIMEZONE env var for local midnight reset (e.g., Asia/Tokyo)
- Added timezone validation with UTC fallback
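
A sketch of the local-midnight day key with UTC fallback described above (the real dynamo-quota-manager.ts may structure this differently):

```ts
// Resolve "today" in QUOTA_TIMEZONE, falling back to UTC on invalid input.
function getDayKey(timezone = process.env.QUOTA_TIMEZONE): string {
    try {
        // en-CA yields YYYY-MM-DD; an unknown time zone throws a RangeError.
        return new Intl.DateTimeFormat("en-CA", {
            timeZone: timezone || "UTC",
        }).format(new Date())
    } catch {
        return new Intl.DateTimeFormat("en-CA", { timeZone: "UTC" }).format(
            new Date(),
        )
    }
}
```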
2025-12-24 10:34:54 +09:00
Dayuan Jiang
1fab261cd0 refactor: extract dev XML streaming simulator to separate component (#388)
- Move DEV_XML_PRESETS constants to new file
- Create DevXmlSimulator component with all simulator logic
- Add preset dropdown with 5 test cases including HTML escape test
- Set default interval to 1ms and chunk size to 10 chars
- Simplify chat-panel.tsx by removing ~130 lines of inline code
2025-12-24 09:52:50 +09:00
Dayuan Jiang
7a4a04c263 fix: remove unused partialXmlRef prop from ChatMessageDisplay (#387) 2025-12-24 09:37:32 +09:00
Dayuan Jiang
0d2e7a7ad6 fix: escape HTML in XML attribute values to prevent parse errors (#386)
- Add HTML escaping (<, >) in convertToLegalXml for attribute values
- Update isMxCellXmlComplete to handle any LLM provider's wrapper tags
- Add wrapper tag stripping in wrapWithMxFile for DeepSeek/Anthropic tags
- Update autoFixXml to escape both < and > in attribute values

Fixes 'Malformed XML detected in final output' error when AI generates
diagrams with HTML content in value attributes like <b>Title</b>.
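
The escaping amounts to something like the sketch below (the project's convertToLegalXml/autoFixXml handle more cases than this):

```ts
// Escape < and > inside value="..." attributes so content like
// value="<b>Title</b>" parses as attribute text instead of markup.
function escapeHtmlInValueAttr(xml: string): string {
    return xml.replace(/value="([^"]*)"/g, (_match: string, inner: string) =>
        `value="${inner.replace(/</g, "&lt;").replace(/>/g, "&gt;")}"`,
    )
}
```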
2025-12-24 09:31:54 +09:00
Dayuan Jiang
3218ccc909 feat: add dev XML streaming simulator for UI debugging (#385) 2025-12-24 09:29:29 +09:00
Dayuan Jiang
d3be96de79 refactor: redesign config panels with refined minimal aesthetic (#384)
- Add CSS design system tokens (surfaces, borders, animations) to globals.css
- Update dialog.tsx with rounded-2xl, shadow-dialog, refined close button
- Enhance input.tsx with rounded-xl and refined focus states
- Refactor settings-dialog with SettingItem pattern and consistent control sizing
- Refactor model-config-dialog with ConfigSection/ConfigCard helpers
- Replace emerald-* classes with success design tokens
- Remove unused ValidationButton component and scrollState
2025-12-23 21:52:04 +09:00
Dayuan Jiang
b2dfd5b890 fix: display correct quota values in limit toast (#383)
- Parse JSON error response from server to get actual used/limit values
- Previously showed 0/0 due to race condition (config fetch vs error)
- AI SDK puts full response body in error.message for non-OK responses
- Updated all quota toasts (request, token, TPM) to use server values
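
A sketch of the client-side parse; the field names follow the 429 body produced by the quota check shown in the route diff below, while the helper itself is illustrative:

```ts
// The AI SDK surfaces the non-OK response body in error.message,
// so the quota toast can show the server's actual used/limit values.
function parseQuotaError(err: Error) {
    try {
        return JSON.parse(err.message) as {
            error: string
            type: string
            used: number
            limit: number
        }
    } catch {
        return null // not a quota payload; fall back to a generic toast
    }
}
```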
2025-12-23 21:08:21 +09:00
Dayuan Jiang
72d647de7a fix: use Chat Completions API for OpenAI-compatible proxies (#382)
Third-party OpenAI-compatible proxies typically don't support the
/responses endpoint. Use .chat() for custom baseURLs while keeping
Responses API for official OpenAI to preserve reasoning model support.

Fixes #377
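
In @ai-sdk/openai terms the selection looks roughly like this (a sketch; the real provider factory in lib/ai-providers.ts handles more providers and options):

```ts
import { createOpenAI } from "@ai-sdk/openai"

function getOpenAIModel(modelId: string, apiKey: string, baseUrl?: string) {
    const openai = createOpenAI({
        apiKey,
        ...(baseUrl && { baseURL: baseUrl }),
    })
    // Custom proxy -> Chat Completions; official OpenAI -> default call
    // (Responses API) so reasoning model support is preserved.
    return baseUrl ? openai.chat(modelId) : openai(modelId)
}
```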
2025-12-23 20:29:48 +09:00
Dayuan Jiang
c6b0e5ac62 fix: use totalUsage with all token types for accurate quota tracking (#381)
The onFinish callback's 'usage' only contains the final step's tokens,
which underreports usage for multi-step tool calls (like diagram generation).
Changed to 'totalUsage' which provides cumulative counts across all steps.

Include all 4 token types for accurate counting:
1. inputTokens - non-cached input tokens
2. outputTokens - generated output tokens
3. cachedInputTokens - tokens read from prompt cache
4. inputTokenDetails.cacheWriteTokens - tokens written to cache

Tested locally:
- Request 1 (cache write): 334 + 62 + 0 + 6671 = 7,067 tokens
- Request 2 (cache read): 334 + 184 + 6551 + 120 = 7,189 tokens
- DynamoDB total: 14,256 ✓
2025-12-23 20:19:28 +09:00
Dayuan Jiang
7de192e1fa fix: enable progressive diagram rendering during streaming (#380)
- Add extractCompleteMxCells() to extract only complete mxCell elements from partial XML
- Remove useEffect cleanup that was killing debounce timeouts on every re-render
- Wrap XML in <root> tags for proper DOMParser validation

Previously, diagrams only rendered after ALL XML finished streaming because:
1. useEffect cleanup cleared the 150ms debounce timeout on every message change
2. DOMParser rejected partial XML like '<mxCell id="2" value="...' (incomplete)

Now each complete mxCell renders progressively as it finishes streaming.
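
A rough sketch of the extraction step; the real extractCompleteMxCells imported in chat-message-display.tsx is likely more robust (e.g. around > characters inside attribute values):

```ts
// Keep only mxCell elements that are fully closed, so partial trailing XML
// from the stream never reaches DOMParser.
function extractCompleteMxCells(partialXml: string): string {
    const cells = partialXml.match(
        /<mxCell\b[^>]*(?:\/>|>[\s\S]*?<\/mxCell>)/g,
    )
    return cells ? cells.join("\n") : ""
}
```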
2025-12-23 18:54:03 +09:00
Dayuan Jiang
97ae9395cd feat: add server-side quota tracking with DynamoDB (#379)
- Add dynamo-quota-manager.ts for atomic quota checks using ConditionExpression
- Enforce daily request limit, daily token limit, and TPM limit
- Return 429 with quota details (type, used, limit) when exceeded
- Quota is opt-in: only enabled when DYNAMODB_QUOTA_TABLE env var is set
- Remove client-side quota enforcement (server is now source of truth)
- Simplify use-quota-manager.tsx to only display toasts
- Add @aws-sdk/client-dynamodb dependency
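
A sketch of the atomic request check under the ConditionExpression approach described above. The table name comes from DYNAMODB_QUOTA_TABLE; the key layout and attribute names here are assumptions, and the real checkAndIncrementRequest also enforces the daily token and TPM limits:

```ts
import {
    ConditionalCheckFailedException,
    DynamoDBClient,
    UpdateItemCommand,
} from "@aws-sdk/client-dynamodb"

const client = new DynamoDBClient({})

// Atomically increment the user's daily request count, but only while it is
// still below the limit; a failed condition means the quota is exhausted.
async function incrementIfUnderLimit(
    userId: string,
    day: string,
    limit: number,
): Promise<{ allowed: boolean }> {
    try {
        await client.send(
            new UpdateItemCommand({
                TableName: process.env.DYNAMODB_QUOTA_TABLE,
                Key: { pk: { S: `${userId}#${day}` } }, // assumed key shape
                UpdateExpression: "ADD requestCount :one",
                ConditionExpression:
                    "attribute_not_exists(requestCount) OR requestCount < :limit",
                ExpressionAttributeValues: {
                    ":one": { N: "1" },
                    ":limit": { N: String(limit) },
                },
            }),
        )
        return { allowed: true }
    } catch (err) {
        if (err instanceof ConditionalCheckFailedException) {
            return { allowed: false }
        }
        throw err
    }
}
```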
2025-12-23 18:36:27 +09:00
Dayuan Jiang
5ec05eb100 refactor: simplify Langfuse integration with AI SDK 6 (#375)
- Remove manual token attribute setting (AI SDK 6 telemetry auto-reports)
- Use totalTokens directly instead of inputTokens + outputTokens calculation
- Fix sessionId bug in log-save/log-feedback (prevents wrong trace attachment)
- Hash IP addresses for privacy instead of storing raw IPs
- Fix isLangfuseEnabled() to check both keys for consistency
2025-12-23 16:26:45 +09:00
Dayuan Jiang
9aec7eda79 fix: add continuation retry limit for truncated diagrams (#372)
Previously, continuation mode (for truncated XML) had unlimited client-side
retries, relying only on server stepCountIs(5) limit. This could cause
excessive API calls (495 observed) when XML truncation kept occurring.

Added MAX_CONTINUATION_RETRY_COUNT=2 to limit continuation attempts:
- After 2 failed continuation attempts, shows error toast and stops
- Resets on successful completion or user-initiated message
- Also resets when quota limits are hit
2025-12-23 14:17:06 +09:00
Dayuan Jiang
a0fbc0ad33 fix: use last user message for Langfuse trace input (#371)
In multi-step tool flows, messages array contains assistant messages
from previous steps. Using messages[messages.length - 1] would record
the assistant's response as trace input instead of the user's question.
2025-12-23 13:43:28 +09:00
Dayuan Jiang
0385c45a10 fix: OpenAI reasoning/thinking blocks not showing (#370)
- Use Responses API instead of Chat Completions API for OpenAI
  (.chat() -> default call) to support reasoning events
- Add o4 to reasoning model detection
- Change default reasoningSummary from 'detailed' to 'auto'
  (not all models support 'detailed')
- Update types to match AI SDK: 'auto' | 'detailed'
2025-12-23 13:38:50 +09:00
Dayuan Jiang
5262b7bfb2 chore: upgrade AI SDK to v6.0.1 (#369)
- Upgrade ai package from ^5.0.89 to ^6.0.1
- Upgrade @ai-sdk/* provider packages to latest v3/v4
- Update convertToModelMessages call to async (new API)
- Fix usage.cachedInputTokens to usage.inputTokenDetails?.cacheReadTokens
2025-12-23 13:31:42 +09:00
Dayuan Jiang
8cb7494d16 feat(i18n): add translations for model configuration UI (#368)
- Add ~40 new translation keys for model-config-dialog and model-selector
- Support English, Chinese, and Japanese translations
- Replace all hardcoded strings with dictionary lookups
2025-12-23 11:42:27 +09:00
Dayuan Jiang
98625dd72a docs: update about page model info to Haiku 4.5 (#367) 2025-12-23 10:22:31 +09:00
Dayuan Jiang
b5734aa5e1 chore: hide notice icon from header (#366) 2025-12-23 10:08:14 +09:00
Dayuan Jiang
87cdc53665 fix: improve Langfuse span filter to exclude all Next.js infrastructure traces (#365)
* debug: add log to verify instrumentation initialization

* fix: improve Langfuse span filter to exclude all Next.js infrastructure traces
2025-12-23 09:47:23 +09:00
Dayuan Jiang
b4fc259de8 chore: bump version to 0.4.6 (#364)
* chore: bump version to 0.4.6

* style: auto-format with Biome

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-12-23 09:09:39 +09:00
Dayuan Jiang
28f9a81e7b chore: add build-time arg for showing About and Notice (#360) 2025-12-23 01:06:42 +09:00
Dayuan Jiang
0f67884ead fix: include instrumentation.ts in standalone build for Langfuse (#359)
Add outputFileTracingIncludes to next.config.ts to ensure instrumentation.ts
is included in standalone builds (required for App Runner deployment)
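
The change amounts to roughly the following in next.config.ts (the exact route key and glob in the repo may differ):

```ts
import type { NextConfig } from "next"

const nextConfig: NextConfig = {
    output: "standalone",
    // Force instrumentation.ts into the traced standalone output so the
    // Langfuse/OTel setup runs when deployed on App Runner.
    outputFileTracingIncludes: {
        "/**": ["./instrumentation.ts"],
    },
}

export default nextConfig
```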
2025-12-23 01:03:11 +09:00
Dayuan Jiang
3521495ead chore: conditionally show about and notice based on env var (#358) 2025-12-23 00:32:22 +09:00
Dayuan Jiang
6446454cd7 fix: add SSRF protection to validate-model endpoint (#357)
Block private IPs, localhost, cloud metadata endpoints (169.254.169.254),
and internal hostnames in custom baseUrl parameter to prevent server-side
request forgery attacks.
2025-12-23 00:26:01 +09:00
Biki Kalita
84959637db Support subdirectory deployment and fix API path handling (#311)
* feat: support subdirectory deployment (NEXT_PUBLIC_BASE_PATH)

* remove unwanted check and fix favicon issue

* Use getAssetUrl for manifest assets to avoid undefined NEXT_PUBLIC_BASE_PATH

* Add validation warning for NEXT_PUBLIC_BASE_PATH format

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
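
The base-path helpers referenced throughout the diffs below (getAssetUrl, getApiEndpoint) reduce to prefixing paths with NEXT_PUBLIC_BASE_PATH; a minimal sketch, assuming lib/base-path.ts looks roughly like this:

```ts
// Read once at build time; empty string means the app is served from "/".
const BASE_PATH = process.env.NEXT_PUBLIC_BASE_PATH || ""

export function getAssetUrl(path: string): string {
    // Prefix static assets so they resolve under a subdirectory deployment.
    return `${BASE_PATH}${path}`
}

export function getApiEndpoint(path: string): string {
    // API routes live under the same base path as the rest of the app.
    return `${BASE_PATH}${path}`
}
```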
2025-12-22 23:28:55 +09:00
pointerhacker
9e9ea10beb fix: feature/sglang-provider (#302)
Co-authored-by: zhaochaojin <zhaochaojin@didiglobal.com>
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-22 23:13:45 +09:00
Biki Kalita
deae5c2c38 Fix: Localize TPM rate-limit toast via i18n (#353)
* fix: localize hardcoded English TPM error toast

* fix: correct JA/ZH translations to use tokens instead of requests

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-22 23:00:20 +09:00
Twelveeee
6e2d98e52d move Language Selector into SettingDialog (#352)
* fix: custom model setting bug

* refactor: consolidate aiProvider checks for cleaner code

* fix: integrate the language selection option into the `SettingsDialog`

* fix: wrap useSearchParams() in a Suspense boundary at the page level

* fix: improve semantic HTML and maintainability

- Replace nested button>a with proper anchor element for GitHub link
- Use i18n.locales.map() with LANGUAGE_LABELS for language options

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-22 22:54:25 +09:00
Dayuan Jiang
85cb441e26 feat: multi-provider model configuration with UI/UX improvements (#355)
* feat: add multi-provider model configuration

- Add model config dialog for managing multiple AI providers
- Support for OpenAI, Anthropic, Google, Azure, Bedrock, OpenRouter, DeepSeek, SiliconFlow, Ollama, and AI Gateway
- Add model selector dropdown in chat panel header
- Add API key validation endpoint
- Add custom model ID input with keyboard navigation
- Fix hover highlight in Command component
- Add suggested models for each provider including latest Claude 4.5 series
- Store configuration locally in browser

* feat: improve model config UI and move selector to chat input

- Move model selector from header to chat input (left of send button)
- Add per-model validation status (queued, running, valid, invalid)
- Filter model selector to only show verified models
- Add editable model IDs in config dialog
- Add custom model input field alongside suggested models dropdown
- Fix hover states on provider buttons and select triggers
- Update OpenAI suggested models with GPT-5 series
- Add alert-dialog component for delete confirmation

* refactor: revert shadcn component changes, apply hover fix at usage site

* feat: add AWS credentials support for Bedrock provider

- Add AWS Access Key ID, Secret Access Key, Region fields for Bedrock
- Show different credential fields based on provider type
- Update validation API to handle Bedrock with AWS credentials
- Add region selector with common AWS regions

* fix: reset Test button after validation completes

* fix: reset validation button to Test after success

* fix: complete bedrock support and UI/UX improvements

- Add bedrock to ALLOWED_CLIENT_PROVIDERS for client credentials
- Pass AWS credentials through full chain (headers → API → provider)
- Replace non-existent GPT-5 models with real ones (o1, o3-mini)
- Add accessibility: aria-labels, focus-visible rings, inline errors
- Add more AWS regions (Ohio, London, Paris, Mumbai, Seoul, São Paulo)
- Fix setTimeout cleanup with useRef on component unmount
- Fix TypeScript type consistency in getSelectedAIConfig fallback

* chore: remove unused code

- Remove unused setAccessCodeRequired state in chat-panel.tsx
- Remove unused getSelectedModel export in model-config.ts

* fix: UI/UX improvements for model configuration dialog

- Add gradient header styling with icon badge
- Change Configuration section icon from Key to Settings2
- Add duplicate model detection with warning banner and inline removal
- Filter out already-added models from suggestions dropdown
- Add type-to-confirm for deleting providers with 3+ models
- Enhance delete confirmation dialog with warning icon
- Improve model selector discoverability (show model name + chevron)
- Add truncation for long model names with title tooltip
- Remove AI provider settings from Settings dialog (now in Model Config)
- Extract ValidationButton into reusable component

* fix: prevent duplicate model IDs within same provider

- Block adding model if ID already exists in provider
- Block editing model ID to match existing model in provider

* fix: improve duplicate model ID notifications

- Add toast notification when trying to add duplicate model
- Allow free typing when editing model ID, validate on blur
- Show warning toast instead of blocking input

* fix: improve duplicate model validation UX in config dialog

- Add inline error display for duplicate model IDs
- Show red border on input when error exists
- Validate on blur with shake animation for edit errors
- Prevent saving empty model names
- Clear errors when user starts typing
- Simplify error styling (small red text, no heavy chips)
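
A sketch of the "headers → API → provider" chain for client-stored credentials; the header names follow the reads added in app/api/chat/route.ts (shown in the diff below), while the function itself is illustrative:

```ts
function buildProviderHeaders(cfg: {
    provider: string
    modelId: string
    apiKey?: string
    baseUrl?: string
    awsAccessKeyId?: string
    awsSecretAccessKey?: string
    awsRegion?: string
}): Record<string, string> {
    const headers: Record<string, string> = {
        "x-ai-provider": cfg.provider,
        "x-ai-model": cfg.modelId,
    }
    if (cfg.apiKey) headers["x-ai-api-key"] = cfg.apiKey
    if (cfg.baseUrl) headers["x-ai-base-url"] = cfg.baseUrl
    // Bedrock sends AWS credentials instead of an API key.
    if (cfg.awsAccessKeyId) headers["x-aws-access-key-id"] = cfg.awsAccessKeyId
    if (cfg.awsSecretAccessKey)
        headers["x-aws-secret-access-key"] = cfg.awsSecretAccessKey
    if (cfg.awsRegion) headers["x-aws-region"] = cfg.awsRegion
    return headers
}
```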
2025-12-22 22:36:36 +09:00
Dayuan Jiang
b088a0653e chore: update app icons with new diagram hierarchy design (#350) 2025-12-22 13:24:08 +09:00
69 changed files with 7535 additions and 3535 deletions

View File

@@ -63,6 +63,8 @@ jobs:
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64,linux/arm64
build-args: |
NEXT_PUBLIC_SHOW_ABOUT_AND_NOTICE=true
# Push to AWS ECR for App Runner auto-deploy
- name: Configure AWS credentials

View File

@@ -26,6 +26,10 @@ ENV NEXT_TELEMETRY_DISABLED=1
ARG NEXT_PUBLIC_DRAWIO_BASE_URL=https://embed.diagrams.net
ENV NEXT_PUBLIC_DRAWIO_BASE_URL=${NEXT_PUBLIC_DRAWIO_BASE_URL}
# Build-time argument to show About link and Notice icon
ARG NEXT_PUBLIC_SHOW_ABOUT_AND_NOTICE=false
ENV NEXT_PUBLIC_SHOW_ABOUT_AND_NOTICE=${NEXT_PUBLIC_SHOW_ABOUT_AND_NOTICE}
# Build Next.js application (standalone mode)
RUN npm run build

View File

@@ -147,14 +147,13 @@ Download the native desktop app for your platform from the [Releases page](https
| Linux | `.AppImage` or `.deb` (x64 & ARM64) |
**Features:**
- **Secure API key storage**: Credentials encrypted using OS keychain
- **Configuration presets**: Save and switch between AI providers via menu
- **Native file dialogs**: Open/save `.drawio` files directly
- **Offline capable**: Works without internet after first launch
- **Built-in settings**: Configure AI providers directly in the app
**Quick Setup:**
1. Download and install for your platform
2. Open the app → **Menu → Configuration → Manage Presets**
2. Click the settings icon in the chat panel
3. Add your AI provider credentials
4. Start creating diagrams!

View File

@@ -117,9 +117,9 @@ export default function AboutCN() {
(TPS/TPM)
</p>
<p>
使 Claude {" "}
使 Opus 4.5 {" "}
<span className="font-semibold text-amber-700">
minimax-m2
Haiku 4.5
</span>
</p>

View File

@@ -126,9 +126,9 @@ export default function AboutJA() {
</p>
<p>
Claude {" "}
Opus 4.5 {" "}
<span className="font-semibold text-amber-700">
minimax-m2
Haiku 4.5
</span>{" "}
</p>

View File

@@ -129,9 +129,9 @@ export default function About() {
</p>
<p>
Due to the high usage, I have changed the
model from Claude to{" "}
model from Opus 4.5 to{" "}
<span className="font-semibold text-amber-700">
minimax-m2
Haiku 4.5
</span>
, which is more cost-effective.
</p>

View File

@@ -14,6 +14,11 @@ import path from "path"
import { z } from "zod"
import { getAIModel, supportsPromptCaching } from "@/lib/ai-providers"
import { findCachedResponse } from "@/lib/cached-responses"
import {
checkAndIncrementRequest,
isQuotaEnabled,
recordTokenUsage,
} from "@/lib/dynamo-quota-manager"
import {
getTelemetryConfig,
setTraceInput,
@@ -21,6 +26,7 @@ import {
wrapWithObserve,
} from "@/lib/langfuse"
import { getSystemPrompt } from "@/lib/system-prompts"
import { getUserIdFromRequest } from "@/lib/user-id"
export const maxDuration = 120
@@ -162,9 +168,8 @@ async function handleChatRequest(req: Request): Promise<Response> {
const { messages, xml, previousXml, sessionId } = await req.json()
// Get user IP for Langfuse tracking
const forwardedFor = req.headers.get("x-forwarded-for")
const userId = forwardedFor?.split(",")[0]?.trim() || "anonymous"
// Get user ID for Langfuse tracking and quota
const userId = getUserIdFromRequest(req)
// Validate sessionId for Langfuse (must be string, max 200 chars)
const validSessionId =
@@ -173,9 +178,12 @@ async function handleChatRequest(req: Request): Promise<Response> {
: undefined
// Extract user input text for Langfuse trace
const lastMessage = messages[messages.length - 1]
// Find the last USER message, not just the last message (which could be assistant in multi-step tool flows)
const lastUserMessage = [...messages]
.reverse()
.find((m: any) => m.role === "user")
const userInputText =
lastMessage?.parts?.find((p: any) => p.type === "text")?.text || ""
lastUserMessage?.parts?.find((p: any) => p.type === "text")?.text || ""
// Update Langfuse trace with input, session, and user
setTraceInput({
@@ -184,6 +192,33 @@ async function handleChatRequest(req: Request): Promise<Response> {
userId: userId,
})
// === SERVER-SIDE QUOTA CHECK START ===
// Quota is opt-in: only enabled when DYNAMODB_QUOTA_TABLE env var is set
const hasOwnApiKey = !!(
req.headers.get("x-ai-provider") && req.headers.get("x-ai-api-key")
)
// Skip quota check if: quota disabled, user has own API key, or is anonymous
if (isQuotaEnabled() && !hasOwnApiKey && userId !== "anonymous") {
const quotaCheck = await checkAndIncrementRequest(userId, {
requests: Number(process.env.DAILY_REQUEST_LIMIT) || 10,
tokens: Number(process.env.DAILY_TOKEN_LIMIT) || 200000,
tpm: Number(process.env.TPM_LIMIT) || 20000,
})
if (!quotaCheck.allowed) {
return Response.json(
{
error: quotaCheck.error,
type: quotaCheck.type,
used: quotaCheck.used,
limit: quotaCheck.limit,
},
{ status: 429 },
)
}
}
// === SERVER-SIDE QUOTA CHECK END ===
// === FILE VALIDATION START ===
const fileValidation = validateFileParts(messages)
if (!fileValidation.valid) {
@@ -214,6 +249,11 @@ async function handleChatRequest(req: Request): Promise<Response> {
baseUrl: req.headers.get("x-ai-base-url"),
apiKey: req.headers.get("x-ai-api-key"),
modelId: req.headers.get("x-ai-model"),
// AWS Bedrock credentials
awsAccessKeyId: req.headers.get("x-aws-access-key-id"),
awsSecretAccessKey: req.headers.get("x-aws-secret-access-key"),
awsRegion: req.headers.get("x-aws-region"),
awsSessionToken: req.headers.get("x-aws-session-token"),
}
// Read minimal style preference from header
@@ -232,9 +272,10 @@ async function handleChatRequest(req: Request): Promise<Response> {
// Get the appropriate system prompt based on model (extended for Opus/Haiku 4.5)
const systemMessage = getSystemPrompt(modelId, minimalStyle)
// Extract file parts (images) from the last message
// Extract file parts (images) from the last user message
const fileParts =
lastMessage.parts?.filter((part: any) => part.type === "file") || []
lastUserMessage?.parts?.filter((part: any) => part.type === "file") ||
[]
// User input only - XML is now in a separate cached system message
const formattedUserInput = `User input:
@@ -243,7 +284,7 @@ ${userInputText}
"""`
// Convert UIMessages to ModelMessages and add system message
const modelMessages = convertToModelMessages(messages)
const modelMessages = await convertToModelMessages(messages)
// DEBUG: Log incoming messages structure
console.log("[route.ts] Incoming messages count:", messages.length)
@@ -497,12 +538,26 @@ ${userInputText}
userId,
}),
}),
onFinish: ({ text, usage }) => {
// Pass usage to Langfuse (Bedrock streaming doesn't auto-report tokens to telemetry)
setTraceOutput(text, {
promptTokens: usage?.inputTokens,
completionTokens: usage?.outputTokens,
})
onFinish: ({ text, totalUsage }) => {
// AI SDK 6 telemetry auto-reports token usage on its spans
setTraceOutput(text)
// Record token usage for server-side quota tracking (if enabled)
// Use totalUsage (cumulative across all steps) instead of usage (final step only)
// Include all 4 token types: input, output, cache read, cache write
if (
isQuotaEnabled() &&
!hasOwnApiKey &&
userId !== "anonymous" &&
totalUsage
) {
const totalTokens =
(totalUsage.inputTokens || 0) +
(totalUsage.outputTokens || 0) +
(totalUsage.cachedInputTokens || 0) +
(totalUsage.inputTokenDetails?.cacheWriteTokens || 0)
recordTokenUsage(userId, totalTokens)
}
},
tools: {
// Client-side tool that will be executed on the client
@@ -554,14 +609,22 @@ Operations:
For update/add, new_xml must be a complete mxCell element including mxGeometry.
⚠️ JSON ESCAPING: Every " inside new_xml MUST be escaped as \\". Example: id=\\"5\\" value=\\"Label\\"`,
⚠️ JSON ESCAPING: Every " inside new_xml MUST be escaped as \\". Example: id=\\"5\\" value=\\"Label\\"
Example - Add a rectangle:
{"operations": [{"operation": "add", "cell_id": "rect-1", "new_xml": "<mxCell id=\\"rect-1\\" value=\\"Hello\\" style=\\"rounded=0;\\" vertex=\\"1\\" parent=\\"1\\"><mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/></mxCell>"}]}
Example - Delete a cell:
{"operations": [{"operation": "delete", "cell_id": "rect-1"}]}`,
inputSchema: z.object({
operations: z
.array(
z.object({
type: z
operation: z
.enum(["update", "add", "delete"])
.describe("Operation type"),
.describe(
"Operation to perform: add, update, or delete",
),
cell_id: z
.string()
.describe(
@@ -672,19 +735,9 @@ Call this tool to get shape names and usage syntax for a specific library.`,
messageMetadata: ({ part }) => {
if (part.type === "finish") {
const usage = (part as any).totalUsage
if (!usage) {
console.warn(
"[messageMetadata] No usage data in finish part",
)
return undefined
}
// Total input = non-cached + cached (these are separate counts)
// Note: cacheWriteInputTokens is not available on finish part
const totalInputTokens =
(usage.inputTokens ?? 0) + (usage.cachedInputTokens ?? 0)
// AI SDK 6 provides totalTokens directly
return {
inputTokens: totalInputTokens,
outputTokens: usage.outputTokens ?? 0,
totalTokens: usage?.totalTokens ?? 0,
finishReason: (part as any).finishReason,
}
}

View File

@@ -1,6 +1,7 @@
import { randomUUID } from "crypto"
import { z } from "zod"
import { getLangfuseClient } from "@/lib/langfuse"
import { getUserIdFromRequest } from "@/lib/user-id"
const feedbackSchema = z.object({
messageId: z.string().min(1).max(200),
@@ -27,9 +28,13 @@ export async function POST(req: Request) {
const { messageId, feedback, sessionId } = data
// Get user IP for tracking
const forwardedFor = req.headers.get("x-forwarded-for")
const userId = forwardedFor?.split(",")[0]?.trim() || "anonymous"
// Skip logging if no sessionId - prevents attaching to wrong user's trace
if (!sessionId) {
return Response.json({ success: true, logged: false })
}
// Get user ID for tracking
const userId = getUserIdFromRequest(req)
try {
// Find the most recent chat trace for this session to attach the score to

View File

@@ -27,6 +27,11 @@ export async function POST(req: Request) {
const { filename, format, sessionId } = data
// Skip logging if no sessionId - prevents attaching to wrong user's trace
if (!sessionId) {
return Response.json({ success: true, logged: false })
}
try {
const timestamp = new Date().toISOString()

View File

@@ -0,0 +1,281 @@
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock"
import { createAnthropic } from "@ai-sdk/anthropic"
import { createDeepSeek, deepseek } from "@ai-sdk/deepseek"
import { createGateway } from "@ai-sdk/gateway"
import { createGoogleGenerativeAI } from "@ai-sdk/google"
import { createOpenAI } from "@ai-sdk/openai"
import { createOpenRouter } from "@openrouter/ai-sdk-provider"
import { generateText } from "ai"
import { NextResponse } from "next/server"
import { createOllama } from "ollama-ai-provider-v2"
export const runtime = "nodejs"
/**
* SECURITY: Check if URL points to private/internal network (SSRF protection)
* Blocks: localhost, private IPs, link-local, AWS metadata service
*/
function isPrivateUrl(urlString: string): boolean {
try {
const url = new URL(urlString)
const hostname = url.hostname.toLowerCase()
// Block localhost
if (
hostname === "localhost" ||
hostname === "127.0.0.1" ||
hostname === "::1"
) {
return true
}
// Block AWS/cloud metadata endpoints
if (
hostname === "169.254.169.254" ||
hostname === "metadata.google.internal"
) {
return true
}
// Check for private IPv4 ranges
const ipv4Match = hostname.match(
/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/,
)
if (ipv4Match) {
const [, a, b] = ipv4Match.map(Number)
// 10.0.0.0/8
if (a === 10) return true
// 172.16.0.0/12
if (a === 172 && b >= 16 && b <= 31) return true
// 192.168.0.0/16
if (a === 192 && b === 168) return true
// 169.254.0.0/16 (link-local)
if (a === 169 && b === 254) return true
// 127.0.0.0/8 (loopback)
if (a === 127) return true
}
// Block common internal hostnames
if (
hostname.endsWith(".local") ||
hostname.endsWith(".internal") ||
hostname.endsWith(".localhost")
) {
return true
}
return false
} catch {
// Invalid URL - block it
return true
}
}
interface ValidateRequest {
provider: string
apiKey: string
baseUrl?: string
modelId: string
// AWS Bedrock specific
awsAccessKeyId?: string
awsSecretAccessKey?: string
awsRegion?: string
}
export async function POST(req: Request) {
try {
const body: ValidateRequest = await req.json()
const {
provider,
apiKey,
baseUrl,
modelId,
awsAccessKeyId,
awsSecretAccessKey,
awsRegion,
} = body
if (!provider || !modelId) {
return NextResponse.json(
{ valid: false, error: "Provider and model ID are required" },
{ status: 400 },
)
}
// SECURITY: Block SSRF attacks via custom baseUrl
if (baseUrl && isPrivateUrl(baseUrl)) {
return NextResponse.json(
{ valid: false, error: "Invalid base URL" },
{ status: 400 },
)
}
// Validate credentials based on provider
if (provider === "bedrock") {
if (!awsAccessKeyId || !awsSecretAccessKey || !awsRegion) {
return NextResponse.json(
{
valid: false,
error: "AWS credentials (Access Key ID, Secret Access Key, Region) are required",
},
{ status: 400 },
)
}
} else if (provider !== "ollama" && !apiKey) {
return NextResponse.json(
{ valid: false, error: "API key is required" },
{ status: 400 },
)
}
let model: any
switch (provider) {
case "openai": {
const openai = createOpenAI({
apiKey,
...(baseUrl && { baseURL: baseUrl }),
})
model = openai.chat(modelId)
break
}
case "anthropic": {
const anthropic = createAnthropic({
apiKey,
baseURL: baseUrl || "https://api.anthropic.com/v1",
})
model = anthropic(modelId)
break
}
case "google": {
const google = createGoogleGenerativeAI({
apiKey,
...(baseUrl && { baseURL: baseUrl }),
})
model = google(modelId)
break
}
case "azure": {
const azure = createOpenAI({
apiKey,
baseURL: baseUrl,
})
model = azure.chat(modelId)
break
}
case "bedrock": {
const bedrock = createAmazonBedrock({
accessKeyId: awsAccessKeyId,
secretAccessKey: awsSecretAccessKey,
region: awsRegion,
})
model = bedrock(modelId)
break
}
case "openrouter": {
const openrouter = createOpenRouter({
apiKey,
...(baseUrl && { baseURL: baseUrl }),
})
model = openrouter(modelId)
break
}
case "deepseek": {
if (baseUrl || apiKey) {
const ds = createDeepSeek({
apiKey,
...(baseUrl && { baseURL: baseUrl }),
})
model = ds(modelId)
} else {
model = deepseek(modelId)
}
break
}
case "siliconflow": {
const sf = createOpenAI({
apiKey,
baseURL: baseUrl || "https://api.siliconflow.com/v1",
})
model = sf.chat(modelId)
break
}
case "ollama": {
const ollama = createOllama({
baseURL: baseUrl || "http://localhost:11434",
})
model = ollama(modelId)
break
}
case "gateway": {
const gw = createGateway({
apiKey,
...(baseUrl && { baseURL: baseUrl }),
})
model = gw(modelId)
break
}
default:
return NextResponse.json(
{ valid: false, error: `Unknown provider: ${provider}` },
{ status: 400 },
)
}
// Make a minimal test request
const startTime = Date.now()
await generateText({
model,
prompt: "Say 'OK'",
maxOutputTokens: 20,
})
const responseTime = Date.now() - startTime
return NextResponse.json({
valid: true,
responseTime,
})
} catch (error) {
console.error("[validate-model] Error:", error)
let errorMessage = "Validation failed"
if (error instanceof Error) {
// Extract meaningful error message
if (
error.message.includes("401") ||
error.message.includes("Unauthorized")
) {
errorMessage = "Invalid API key"
} else if (
error.message.includes("404") ||
error.message.includes("not found")
) {
errorMessage = "Model not found"
} else if (
error.message.includes("429") ||
error.message.includes("rate limit")
) {
errorMessage = "Rate limited - try again later"
} else if (error.message.includes("ECONNREFUSED")) {
errorMessage = "Cannot connect to server"
} else {
errorMessage = error.message.slice(0, 100)
}
}
return NextResponse.json(
{ valid: false, error: errorMessage },
{ status: 200 }, // Return 200 so client can read error message
)
}
}

View File

@@ -144,6 +144,68 @@
--sidebar-ring: oklch(0.7 0.16 265);
}
/* ============================================
REFINED MINIMAL DESIGN SYSTEM
============================================ */
:root {
/* Surface layers for depth */
--surface-0: oklch(1 0 0);
--surface-1: oklch(0.985 0.002 240);
--surface-2: oklch(0.97 0.004 240);
--surface-elevated: oklch(1 0 0);
/* Subtle borders */
--border-subtle: oklch(0.94 0.008 260);
--border-default: oklch(0.91 0.012 260);
/* Interactive states */
--interactive-hover: oklch(0.96 0.015 260);
--interactive-active: oklch(0.93 0.02 265);
/* Success state */
--success: oklch(0.65 0.18 145);
--success-muted: oklch(0.95 0.03 145);
/* Animation timing */
--duration-fast: 120ms;
--duration-normal: 200ms;
--duration-slow: 300ms;
--ease-out: cubic-bezier(0.16, 1, 0.3, 1);
--ease-in-out: cubic-bezier(0.4, 0, 0.2, 1);
--ease-spring: cubic-bezier(0.34, 1.56, 0.64, 1);
}
.dark {
--surface-0: oklch(0.15 0.015 260);
--surface-1: oklch(0.18 0.015 260);
--surface-2: oklch(0.22 0.015 260);
--surface-elevated: oklch(0.25 0.015 260);
--border-subtle: oklch(0.25 0.012 260);
--border-default: oklch(0.3 0.015 260);
--interactive-hover: oklch(0.25 0.02 265);
--interactive-active: oklch(0.3 0.025 270);
--success: oklch(0.7 0.16 145);
--success-muted: oklch(0.25 0.04 145);
}
/* Expose surface colors to Tailwind */
@theme inline {
--color-surface-0: var(--surface-0);
--color-surface-1: var(--surface-1);
--color-surface-2: var(--surface-2);
--color-surface-elevated: var(--surface-elevated);
--color-border-subtle: var(--border-subtle);
--color-border-default: var(--border-default);
--color-interactive-hover: var(--interactive-hover);
--color-interactive-active: var(--interactive-active);
--color-success: var(--success);
--color-success-muted: var(--success-muted);
}
@layer base {
* {
@apply border-border outline-ring/50;
@@ -257,3 +319,83 @@
-webkit-text-fill-color: transparent;
background-clip: text;
}
/* ============================================
REFINED DIALOG STYLES
============================================ */
/* Refined dialog shadow - multi-layer soft shadow */
.shadow-dialog {
box-shadow:
0 0 0 1px oklch(0 0 0 / 0.03),
0 2px 4px oklch(0 0 0 / 0.02),
0 12px 24px oklch(0 0 0 / 0.06),
0 24px 48px oklch(0 0 0 / 0.04);
}
.dark .shadow-dialog {
box-shadow:
0 0 0 1px oklch(1 0 0 / 0.05),
0 2px 4px oklch(0 0 0 / 0.2),
0 12px 24px oklch(0 0 0 / 0.3),
0 24px 48px oklch(0 0 0 / 0.2);
}
/* Dialog animations */
@keyframes dialog-in {
from {
opacity: 0;
transform: translate(-50%, -48%) scale(0.96);
}
to {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
}
@keyframes dialog-out {
from {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
to {
opacity: 0;
transform: translate(-50%, -48%) scale(0.96);
}
}
.animate-dialog-in {
animation: dialog-in var(--duration-normal) var(--ease-out) forwards;
}
.animate-dialog-out {
animation: dialog-out 150ms var(--ease-out) forwards;
}
/* Check pop animation for validation success */
@keyframes check-pop {
0% {
transform: scale(0.8);
opacity: 0;
}
50% {
transform: scale(1.1);
}
100% {
transform: scale(1);
opacity: 1;
}
}
.animate-check-pop {
animation: check-pop 0.25s var(--ease-spring) forwards;
}
/* Reduced motion support */
@media (prefers-reduced-motion: reduce) {
.animate-dialog-in,
.animate-dialog-out,
.animate-check-pop {
animation: none;
}
}

View File

@@ -1,24 +1,24 @@
import type { MetadataRoute } from "next"
import { getAssetUrl } from "@/lib/base-path"
export default function manifest(): MetadataRoute.Manifest {
return {
name: "Next AI Draw.io",
short_name: "AIDraw.io",
description:
"Create AWS architecture diagrams, flowcharts, and technical diagrams using AI. Free online tool integrating draw.io with AI assistance for professional diagram creation.",
start_url: "/",
start_url: getAssetUrl("/"),
display: "standalone",
background_color: "#f9fafb",
theme_color: "#171d26",
icons: [
{
src: "/favicon-192x192.png",
src: getAssetUrl("/favicon-192x192.png"),
sizes: "192x192",
type: "image/png",
purpose: "any",
},
{
src: "/favicon-512x512.png",
src: getAssetUrl("/favicon-512x512.png"),
sizes: "512x512",
type: "image/png",
purpose: "any",

View File

@@ -0,0 +1,170 @@
import { Cloud } from "lucide-react"
import type { ComponentProps, ReactNode } from "react"
import {
Command,
CommandDialog,
CommandEmpty,
CommandGroup,
CommandInput,
CommandItem,
CommandList,
CommandSeparator,
CommandShortcut,
} from "@/components/ui/command"
import {
Dialog,
DialogContent,
DialogTitle,
DialogTrigger,
} from "@/components/ui/dialog"
import { cn } from "@/lib/utils"
export type ModelSelectorProps = ComponentProps<typeof Dialog>
export const ModelSelector = (props: ModelSelectorProps) => (
<Dialog {...props} />
)
export type ModelSelectorTriggerProps = ComponentProps<typeof DialogTrigger>
export const ModelSelectorTrigger = (props: ModelSelectorTriggerProps) => (
<DialogTrigger {...props} />
)
export type ModelSelectorContentProps = ComponentProps<typeof DialogContent> & {
title?: ReactNode
}
export const ModelSelectorContent = ({
className,
children,
title = "Model Selector",
...props
}: ModelSelectorContentProps) => (
<DialogContent className={cn("p-0", className)} {...props}>
<DialogTitle className="sr-only">{title}</DialogTitle>
<Command className="**:data-[slot=command-input-wrapper]:h-auto">
{children}
</Command>
</DialogContent>
)
export type ModelSelectorDialogProps = ComponentProps<typeof CommandDialog>
export const ModelSelectorDialog = (props: ModelSelectorDialogProps) => (
<CommandDialog {...props} />
)
export type ModelSelectorInputProps = ComponentProps<typeof CommandInput>
export const ModelSelectorInput = ({
className,
...props
}: ModelSelectorInputProps) => (
<CommandInput className={cn("h-auto py-3.5", className)} {...props} />
)
export type ModelSelectorListProps = ComponentProps<typeof CommandList>
export const ModelSelectorList = ({
className,
...props
}: ModelSelectorListProps) => (
<div className="relative">
<CommandList
className={cn(
// Hide scrollbar on all platforms
"[&::-webkit-scrollbar]:hidden [-ms-overflow-style:none] [scrollbar-width:none]",
className,
)}
{...props}
/>
{/* Bottom shadow indicator for scrollable content */}
<div className="pointer-events-none absolute bottom-0 left-0 right-0 h-12 bg-gradient-to-t from-muted/80 via-muted/40 to-transparent" />
</div>
)
export type ModelSelectorEmptyProps = ComponentProps<typeof CommandEmpty>
export const ModelSelectorEmpty = (props: ModelSelectorEmptyProps) => (
<CommandEmpty {...props} />
)
export type ModelSelectorGroupProps = ComponentProps<typeof CommandGroup>
export const ModelSelectorGroup = (props: ModelSelectorGroupProps) => (
<CommandGroup {...props} />
)
export type ModelSelectorItemProps = ComponentProps<typeof CommandItem>
export const ModelSelectorItem = (props: ModelSelectorItemProps) => (
<CommandItem {...props} />
)
export type ModelSelectorShortcutProps = ComponentProps<typeof CommandShortcut>
export const ModelSelectorShortcut = (props: ModelSelectorShortcutProps) => (
<CommandShortcut {...props} />
)
export type ModelSelectorSeparatorProps = ComponentProps<
typeof CommandSeparator
>
export const ModelSelectorSeparator = (props: ModelSelectorSeparatorProps) => (
<CommandSeparator {...props} />
)
export type ModelSelectorLogoProps = Omit<
ComponentProps<"img">,
"src" | "alt"
> & {
provider: string
}
export const ModelSelectorLogo = ({
provider,
className,
...props
}: ModelSelectorLogoProps) => {
// Use Lucide icon for bedrock since models.dev doesn't have a good AWS icon
if (provider === "amazon-bedrock") {
return <Cloud className={cn("size-4", className)} />
}
return (
<img
{...props}
alt={`${provider} logo`}
className={cn("size-4 dark:invert", className)}
height={16}
src={`https://models.dev/logos/${provider}.svg`}
width={16}
/>
)
}
export type ModelSelectorLogoGroupProps = ComponentProps<"div">
export const ModelSelectorLogoGroup = ({
className,
...props
}: ModelSelectorLogoGroupProps) => (
<div
className={cn(
"-space-x-1 flex shrink-0 items-center [&>img]:rounded-full [&>img]:bg-background [&>img]:p-px [&>img]:ring-1 dark:[&>img]:bg-foreground",
className,
)}
{...props}
/>
)
export type ModelSelectorNameProps = ComponentProps<"span">
export const ModelSelectorName = ({
className,
...props
}: ModelSelectorNameProps) => (
<span className={cn("flex-1 truncate text-left", className)} {...props} />
)

View File

@@ -9,6 +9,7 @@ import {
Zap,
} from "lucide-react"
import { useDictionary } from "@/hooks/use-dictionary"
import { getAssetUrl } from "@/lib/base-path"
interface ExampleCardProps {
icon: React.ReactNode
@@ -79,7 +80,7 @@ export default function ExamplePanel({
setInput("Replicate this flowchart.")
try {
const response = await fetch("/example.png")
const response = await fetch(getAssetUrl("/example.png"))
const blob = await response.blob()
const file = new File([blob], "example.png", { type: "image/png" })
setFiles([file])
@@ -92,7 +93,7 @@ export default function ExamplePanel({
setInput("Replicate this in aws style")
try {
const response = await fetch("/architecture.png")
const response = await fetch(getAssetUrl("/architecture.png"))
const blob = await response.blob()
const file = new File([blob], "architecture.png", {
type: "image/png",
@@ -107,7 +108,7 @@ export default function ExamplePanel({
setInput("Summarize this paper as a diagram")
try {
const response = await fetch("/chain-of-thought.txt")
const response = await fetch(getAssetUrl("/chain-of-thought.txt"))
const blob = await response.blob()
const file = new File([blob], "chain-of-thought.txt", {
type: "text/plain",

View File

@@ -14,6 +14,7 @@ import { toast } from "sonner"
import { ButtonWithTooltip } from "@/components/button-with-tooltip"
import { ErrorToast } from "@/components/error-toast"
import { HistoryDialog } from "@/components/history-dialog"
import { ModelSelector } from "@/components/model-selector"
import { ResetWarningModal } from "@/components/reset-warning-modal"
import { SaveDialog } from "@/components/save-dialog"
import { Button } from "@/components/ui/button"
@@ -28,6 +29,7 @@ import { useDiagram } from "@/contexts/diagram-context"
import { useDictionary } from "@/hooks/use-dictionary"
import { formatMessage } from "@/lib/i18n/utils"
import { isPdfFile, isTextFile } from "@/lib/pdf-utils"
import type { FlattenedModel } from "@/lib/types/model-config"
import { FilePreviewList } from "./file-preview-list"
const MAX_IMAGE_SIZE = 2 * 1024 * 1024 // 2MB
@@ -156,6 +158,11 @@ interface ChatInputProps {
error?: Error | null
minimalStyle?: boolean
onMinimalStyleChange?: (value: boolean) => void
// Model selector props
models?: FlattenedModel[]
selectedModelId?: string
onModelSelect?: (modelId: string | undefined) => void
onConfigureModels?: () => void
}
export function ChatInput({
@@ -173,6 +180,10 @@ export function ChatInput({
error = null,
minimalStyle = false,
onMinimalStyleChange = () => {},
models = [],
selectedModelId,
onModelSelect = () => {},
onConfigureModels = () => {},
}: ChatInputProps) {
const dict = useDictionary()
const {
@@ -465,6 +476,14 @@ export function ChatInput({
disabled={isDisabled}
/>
<ModelSelector
models={models}
selectedModelId={selectedModelId}
onSelect={onModelSelect}
onConfigure={onConfigureModels}
disabled={isDisabled}
/>
<div className="w-px h-5 bg-border mx-1" />
<Button

View File

@@ -27,9 +27,11 @@ import {
ReasoningTrigger,
} from "@/components/ai-elements/reasoning"
import { ScrollArea } from "@/components/ui/scroll-area"
import { getApiEndpoint } from "@/lib/base-path"
import {
applyDiagramOperations,
convertToLegalXml,
extractCompleteMxCells,
isMxCellXmlComplete,
replaceNodes,
validateAndFixXml,
@@ -38,7 +40,7 @@ import ExamplePanel from "./chat-example-panel"
import { CodeBlock } from "./code-block"
interface DiagramOperation {
type: "update" | "add" | "delete"
operation: "update" | "add" | "delete"
cell_id: string
new_xml?: string
}
@@ -51,12 +53,12 @@ function getCompleteOperations(
return operations.filter(
(op) =>
op &&
typeof op.type === "string" &&
["update", "add", "delete"].includes(op.type) &&
typeof op.operation === "string" &&
["update", "add", "delete"].includes(op.operation) &&
typeof op.cell_id === "string" &&
op.cell_id.length > 0 &&
// delete doesn't need new_xml, update/add do
(op.type === "delete" || typeof op.new_xml === "string"),
(op.operation === "delete" || typeof op.new_xml === "string"),
)
}
@@ -77,20 +79,20 @@ function OperationsDisplay({ operations }: { operations: DiagramOperation[] }) {
<div className="space-y-3">
{operations.map((op, index) => (
<div
key={`${op.type}-${op.cell_id}-${index}`}
key={`${op.operation}-${op.cell_id}-${index}`}
className="rounded-lg border border-border/50 overflow-hidden bg-background/50"
>
<div className="px-3 py-1.5 bg-muted/40 border-b border-border/30 flex items-center gap-2">
<span
className={`text-[10px] font-medium uppercase tracking-wide ${
op.type === "delete"
op.operation === "delete"
? "text-red-600"
: op.type === "add"
: op.operation === "add"
? "text-green-600"
: "text-blue-600"
}`}
>
{op.type}
{op.operation}
</span>
<span className="text-xs text-muted-foreground">
cell_id: {op.cell_id}
@@ -291,7 +293,7 @@ export function ChatMessageDisplay({
setFeedback((prev) => ({ ...prev, [messageId]: value }))
try {
await fetch("/api/log-feedback", {
await fetch(getApiEndpoint("/api/log-feedback"), {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
@@ -314,12 +316,28 @@ export function ChatMessageDisplay({
const handleDisplayChart = useCallback(
(xml: string, showToast = false) => {
const currentXml = xml || ""
let currentXml = xml || ""
const startTime = performance.now()
// During streaming (showToast=false), extract only complete mxCell elements
// This allows progressive rendering even with partial/incomplete trailing XML
if (!showToast) {
const completeCells = extractCompleteMxCells(currentXml)
if (!completeCells) {
return
}
currentXml = completeCells
}
const convertedXml = convertToLegalXml(currentXml)
if (convertedXml !== previousXML.current) {
// Parse and validate XML BEFORE calling replaceNodes
const parser = new DOMParser()
const testDoc = parser.parseFromString(convertedXml, "text/xml")
// Wrap in root element for parsing multiple mxCell elements
const testDoc = parser.parseFromString(
`<root>${convertedXml}</root>`,
"text/xml",
)
const parseError = testDoc.querySelector("parsererror")
if (parseError) {
@@ -346,7 +364,22 @@ export function ChatMessageDisplay({
`<mxfile><diagram name="Page-1" id="page-1"><mxGraphModel><root><mxCell id="0"/><mxCell id="1" parent="0"/></root></mxGraphModel></diagram></mxfile>`
const replacedXML = replaceNodes(baseXML, convertedXml)
// Validate and auto-fix the XML
const xmlProcessTime = performance.now() - startTime
// During streaming (showToast=false), skip heavy validation for lower latency
// The quick DOM parse check above catches malformed XML
// Full validation runs on final output (showToast=true)
if (!showToast) {
previousXML.current = convertedXml
const loadStartTime = performance.now()
onDisplayChart(replacedXML, true)
console.log(
`[Streaming] XML processing: ${xmlProcessTime.toFixed(1)}ms, drawio load: ${(performance.now() - loadStartTime).toFixed(1)}ms`,
)
return
}
// Final output: run full validation and auto-fix
const validation = validateAndFixXml(replacedXML)
if (validation.valid) {
previousXML.current = convertedXml
@@ -359,18 +392,19 @@ export function ChatMessageDisplay({
)
}
// Skip validation in loadDiagram since we already validated above
const loadStartTime = performance.now()
onDisplayChart(xmlToLoad, true)
console.log(
`[Final] XML processing: ${xmlProcessTime.toFixed(1)}ms, validation+load: ${(performance.now() - loadStartTime).toFixed(1)}ms`,
)
} else {
console.error(
"[ChatMessageDisplay] XML validation failed:",
validation.error,
)
// Only show toast if this is the final XML (not during streaming)
if (showToast) {
toast.error(
"Diagram validation failed. Please try regenerating.",
)
}
toast.error(
"Diagram validation failed. Please try regenerating.",
)
}
} catch (error) {
console.error(
@@ -602,17 +636,10 @@ export function ChatMessageDisplay({
}
})
// Cleanup: clear any pending debounce timeout on unmount
return () => {
if (debounceTimeoutRef.current) {
clearTimeout(debounceTimeoutRef.current)
debounceTimeoutRef.current = null
}
if (editDebounceTimeoutRef.current) {
clearTimeout(editDebounceTimeoutRef.current)
editDebounceTimeoutRef.current = null
}
}
// NOTE: Don't cleanup debounce timeouts here!
// The cleanup runs on every re-render (when messages changes),
// which would cancel the timeout before it fires.
// Let the timeouts complete naturally - they're harmless if component unmounts.
}, [messages, handleDisplayChart, chartXML])
const renderToolPart = (part: ToolPartLike) => {

View File

@@ -18,18 +18,26 @@ import { FaGithub } from "react-icons/fa"
import { Toaster, toast } from "sonner"
import { ButtonWithTooltip } from "@/components/button-with-tooltip"
import { ChatInput } from "@/components/chat-input"
import { ModelConfigDialog } from "@/components/model-config-dialog"
import { ResetWarningModal } from "@/components/reset-warning-modal"
import { SettingsDialog } from "@/components/settings-dialog"
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip"
import { useDiagram } from "@/contexts/diagram-context"
import { useDiagramToolHandlers } from "@/hooks/use-diagram-tool-handlers"
import { useDictionary } from "@/hooks/use-dictionary"
import { getAIConfig } from "@/lib/ai-config"
import { getSelectedAIConfig, useModelConfig } from "@/hooks/use-model-config"
import { getApiEndpoint } from "@/lib/base-path"
import { findCachedResponse } from "@/lib/cached-responses"
import { isPdfFile, isTextFile } from "@/lib/pdf-utils"
import { type FileData, useFileProcessor } from "@/lib/use-file-processor"
import { useQuotaManager } from "@/lib/use-quota-manager"
import { formatXML, isMxCellXmlComplete, wrapWithMxFile } from "@/lib/utils"
import { formatXML } from "@/lib/utils"
import { ChatMessageDisplay } from "./chat-message-display"
import LanguageToggle from "./language-toggle"
import { DevXmlSimulator } from "./dev-xml-simulator"
// localStorage keys for persistence
const STORAGE_MESSAGES_KEY = "next-ai-draw-io-messages"
@@ -71,6 +79,8 @@ const TOOL_ERROR_STATE = "output-error" as const
const DEBUG = process.env.NODE_ENV === "development"
const MAX_AUTO_RETRY_COUNT = 1
const MAX_CONTINUATION_RETRY_COUNT = 2 // Limit for truncation continuation retries
/**
* Check if auto-resubmit should happen based on tool errors.
* Only checks the LAST tool part (most recent tool call), not all tool parts.
@@ -146,7 +156,10 @@ export default function ChatPanel({
const [showHistory, setShowHistory] = useState(false)
const [showSettingsDialog, setShowSettingsDialog] = useState(false)
const [, setAccessCodeRequired] = useState(false)
const [showModelConfigDialog, setShowModelConfigDialog] = useState(false)
// Model configuration hook
const modelConfig = useModelConfig()
const [input, setInput] = useState("")
const [dailyRequestLimit, setDailyRequestLimit] = useState(0)
const [dailyTokenLimit, setDailyTokenLimit] = useState(0)
@@ -164,15 +177,14 @@ export default function ChatPanel({
// Check config on mount
useEffect(() => {
fetch("/api/config")
fetch(getApiEndpoint("/api/config"))
.then((res) => res.json())
.then((data) => {
setAccessCodeRequired(data.accessCodeRequired)
setDailyRequestLimit(data.dailyRequestLimit || 0)
setDailyTokenLimit(data.dailyTokenLimit || 0)
setTpmLimit(data.tpmLimit || 0)
})
.catch(() => setAccessCodeRequired(false))
.catch(() => {})
}, [])
// Quota management using extracted hook
@@ -203,11 +215,10 @@ export default function ChatPanel({
chartXMLRef.current = chartXML
}, [chartXML])
// Ref to hold stop function for use in onToolCall (avoids stale closure)
const stopRef = useRef<(() => void) | null>(null)
// Ref to track consecutive auto-retry count (reset on user action)
const autoRetryCountRef = useRef(0)
// Ref to track continuation retry count (for truncation handling)
const continuationRetryCountRef = useRef(0)
// Ref to accumulate partial XML when output is truncated due to maxOutputTokens
// When partialXmlRef.current.length > 0, we're in continuation mode
@@ -226,6 +237,16 @@ export default function ChatPanel({
> | null>(null)
const LOCAL_STORAGE_DEBOUNCE_MS = 1000 // Save at most once per second
// Diagram tool handlers (display_diagram, edit_diagram, append_diagram)
const { handleToolCall } = useDiagramToolHandlers({
partialXmlRef,
editDiagramOriginalXmlRef,
chartXMLRef,
onDisplayChart,
onFetchChart,
onExport,
})
const {
messages,
sendMessage,
@@ -236,315 +257,49 @@ export default function ChatPanel({
setMessages,
} = useChat({
transport: new DefaultChatTransport({
api: "/api/chat",
api: getApiEndpoint("/api/chat"),
}),
async onToolCall({ toolCall }) {
if (DEBUG) {
console.log(
`[onToolCall] Tool: ${toolCall.toolName}, CallId: ${toolCall.toolCallId}`,
)
}
if (toolCall.toolName === "display_diagram") {
const { xml } = toolCall.input as { xml: string }
// DEBUG: Log raw input to diagnose false truncation detection
console.log(
"[display_diagram] XML ending (last 100 chars):",
xml.slice(-100),
)
console.log("[display_diagram] XML length:", xml.length)
// Check if XML is truncated (incomplete mxCell indicates truncated output)
const isTruncated = !isMxCellXmlComplete(xml)
console.log("[display_diagram] isTruncated:", isTruncated)
if (isTruncated) {
// Store the partial XML for continuation via append_diagram
partialXmlRef.current = xml
// Tell LLM to use append_diagram to continue
const partialEnding = partialXmlRef.current.slice(-500)
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Output was truncated due to length limits. Use the append_diagram tool to continue.
Your output ended with:
\`\`\`
${partialEnding}
\`\`\`
NEXT STEP: Call append_diagram with the continuation XML.
- Do NOT include wrapper tags or root cells (id="0", id="1")
- Start from EXACTLY where you stopped
- Complete all remaining mxCell elements`,
})
return
}
// Complete XML received - use it directly
// (continuation is now handled via append_diagram tool)
const finalXml = xml
partialXmlRef.current = "" // Reset any partial from previous truncation
// Wrap raw XML with full mxfile structure for draw.io
const fullXml = wrapWithMxFile(finalXml)
// loadDiagram validates and returns error if invalid
const validationError = onDisplayChart(fullXml)
if (validationError) {
console.warn(
"[display_diagram] Validation error:",
validationError,
)
// Return error to model - sendAutomaticallyWhen will trigger retry
if (DEBUG) {
console.log(
"[display_diagram] Adding tool output with state: output-error",
)
}
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `${validationError}
Please fix the XML issues and call display_diagram again with corrected XML.
Your failed XML:
\`\`\`xml
${finalXml}
\`\`\``,
})
} else {
// Success - diagram will be rendered by chat-message-display
if (DEBUG) {
console.log(
"[display_diagram] Success! Adding tool output with state: output-available",
)
}
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
output: "Successfully displayed the diagram.",
})
if (DEBUG) {
console.log(
"[display_diagram] Tool output added. Diagram should be visible now.",
)
}
}
} else if (toolCall.toolName === "edit_diagram") {
const { operations } = toolCall.input as {
operations: Array<{
type: "update" | "add" | "delete"
cell_id: string
new_xml?: string
}>
}
let currentXml = ""
try {
// Use the original XML captured during streaming (shared with chat-message-display)
// This ensures we apply operations to the same base XML that streaming used
const originalXml = editDiagramOriginalXmlRef.current.get(
toolCall.toolCallId,
)
if (originalXml) {
currentXml = originalXml
} else {
// Fallback: use chartXML from ref if streaming didn't capture original
const cachedXML = chartXMLRef.current
if (cachedXML) {
currentXml = cachedXML
} else {
// Last resort: export from iframe
currentXml = await onFetchChart(false)
}
}
const { applyDiagramOperations } = await import(
"@/lib/utils"
)
const { result: editedXml, errors } =
applyDiagramOperations(currentXml, operations)
// Check for operation errors
if (errors.length > 0) {
const errorMessages = errors
.map(
(e) =>
`- ${e.type} on cell_id="${e.cellId}": ${e.message}`,
)
.join("\n")
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Some operations failed:\n${errorMessages}
Current diagram XML:
\`\`\`xml
${currentXml}
\`\`\`
Please check the cell IDs and retry.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(
toolCall.toolCallId,
)
return
}
// loadDiagram validates and returns error if invalid
const validationError = onDisplayChart(editedXml)
if (validationError) {
console.warn(
"[edit_diagram] Validation error:",
validationError,
)
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Edit produced invalid XML: ${validationError}
Current diagram XML:
\`\`\`xml
${currentXml}
\`\`\`
Please fix the operations to avoid structural issues.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(
toolCall.toolCallId,
)
return
}
onExport()
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
output: `Successfully applied ${operations.length} operation(s) to the diagram.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(
toolCall.toolCallId,
)
} catch (error) {
console.error("[edit_diagram] Failed:", error)
const errorMessage =
error instanceof Error ? error.message : String(error)
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Edit failed: ${errorMessage}
Current diagram XML:
\`\`\`xml
${currentXml || "No XML available"}
\`\`\`
Please check cell IDs and retry, or use display_diagram to regenerate.`,
})
// Clean up the shared original XML ref even on error
editDiagramOriginalXmlRef.current.delete(
toolCall.toolCallId,
)
}
} else if (toolCall.toolName === "append_diagram") {
const { xml } = toolCall.input as { xml: string }
// Detect if LLM incorrectly started fresh instead of continuing
// LLM should only output bare mxCells now, so wrapper tags indicate error
const trimmed = xml.trim()
const isFreshStart =
trimmed.startsWith("<mxGraphModel") ||
trimmed.startsWith("<root") ||
trimmed.startsWith("<mxfile") ||
trimmed.startsWith('<mxCell id="0"') ||
trimmed.startsWith('<mxCell id="1"')
if (isFreshStart) {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `ERROR: You started fresh with wrapper tags. Do NOT include wrapper tags or root cells (id="0", id="1").
Continue from EXACTLY where the partial ended:
\`\`\`
${partialXmlRef.current.slice(-500)}
\`\`\`
Start your continuation with the NEXT character after where it stopped.`,
})
return
}
// Append to accumulated XML
partialXmlRef.current += xml
// Check if XML is now complete (last mxCell is complete)
const isComplete = isMxCellXmlComplete(partialXmlRef.current)
if (isComplete) {
// Wrap and display the complete diagram
const finalXml = partialXmlRef.current
partialXmlRef.current = "" // Reset
const fullXml = wrapWithMxFile(finalXml)
const validationError = onDisplayChart(fullXml)
if (validationError) {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Validation error after assembly: ${validationError}
Assembled XML:
\`\`\`xml
${finalXml.substring(0, 2000)}...
\`\`\`
Please use display_diagram with corrected XML.`,
})
} else {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
output: "Diagram assembly complete and displayed successfully.",
})
}
} else {
// Still incomplete - signal to continue
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `XML still incomplete (mxCell not closed). Call append_diagram again to continue.
Current ending:
\`\`\`
${partialXmlRef.current.slice(-500)}
\`\`\`
Continue from EXACTLY where you stopped.`,
})
}
}
onToolCall: async ({ toolCall }) => {
await handleToolCall({ toolCall }, addToolOutput)
},
onError: (error) => {
// Handle server-side quota limit (429 response)
// AI SDK puts the full response body in error.message for non-OK responses
try {
const data = JSON.parse(error.message)
if (data.type === "request") {
quotaManager.showQuotaLimitToast(data.used, data.limit)
return
}
if (data.type === "token") {
quotaManager.showTokenLimitToast(data.used, data.limit)
return
}
if (data.type === "tpm") {
quotaManager.showTPMLimitToast(data.limit)
return
}
} catch {
// Not JSON, fall through to string matching for backwards compatibility
}
// Fallback to string matching
if (error.message.includes("Daily request limit")) {
quotaManager.showQuotaLimitToast()
return
}
if (error.message.includes("Daily token limit")) {
quotaManager.showTokenLimitToast()
return
}
if (
error.message.includes("Rate limit exceeded") ||
error.message.includes("tokens per minute")
) {
quotaManager.showTPMLimitToast()
return
}
// Silence access code error in console since it's handled by UI
if (!error.message.includes("Invalid or missing access code")) {
console.error("Chat error:", error)
@@ -609,8 +364,7 @@ Continue from EXACTLY where you stopped.`,
})
if (error.message.includes("Invalid or missing access code")) {
// Show settings button and open dialog to help user fix it
setAccessCodeRequired(true)
// Show settings dialog to help user fix it
setShowSettingsDialog(true)
}
},
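// Shape note (assumption, inferred from the fields read above rather than a
// documented server contract): the parsed 429 body is expected to look like
// { type: "request" | "token" | "tpm", used?: number, limit?: number }.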
@@ -622,22 +376,6 @@ Continue from EXACTLY where you stopped.`,
// DEBUG: Log finish reason to diagnose truncation
console.log("[onFinish] finishReason:", metadata?.finishReason)
console.log("[onFinish] metadata:", metadata)
if (metadata) {
// Use Number.isFinite to guard against NaN (typeof NaN === 'number' is true)
const inputTokens = Number.isFinite(metadata.inputTokens)
? (metadata.inputTokens as number)
: 0
const outputTokens = Number.isFinite(metadata.outputTokens)
? (metadata.outputTokens as number)
: 0
const actualTokens = inputTokens + outputTokens
if (actualTokens > 0) {
quotaManager.incrementTokenCount(actualTokens)
quotaManager.incrementTPMCount(actualTokens)
}
}
},
sendAutomaticallyWhen: ({ messages }) => {
const isInContinuationMode = partialXmlRef.current.length > 0
@@ -649,15 +387,25 @@ Continue from EXACTLY where you stopped.`,
if (!shouldRetry) {
// No error, reset retry count and clear state
autoRetryCountRef.current = 0
continuationRetryCountRef.current = 0
partialXmlRef.current = ""
return false
}
// Continuation mode: unlimited retries (truncation continuation, not real errors)
// Server limits to 5 steps via stepCountIs(5)
// Continuation mode: limited retries for truncation handling
if (isInContinuationMode) {
// Don't count against retry limit for continuation
// Quota checks still apply below
if (
continuationRetryCountRef.current >=
MAX_CONTINUATION_RETRY_COUNT
) {
toast.error(
`Continuation retry limit reached (${MAX_CONTINUATION_RETRY_COUNT}). The diagram may be too complex.`,
)
continuationRetryCountRef.current = 0
partialXmlRef.current = ""
return false
}
continuationRetryCountRef.current++
} else {
// Regular error: check retry count limit
if (autoRetryCountRef.current >= MAX_AUTO_RETRY_COUNT) {
@@ -672,30 +420,10 @@ Continue from EXACTLY where you stopped.`,
autoRetryCountRef.current++
}
// Check quota limits before auto-retry
const tokenLimitCheck = quotaManager.checkTokenLimit()
if (!tokenLimitCheck.allowed) {
quotaManager.showTokenLimitToast(tokenLimitCheck.used)
autoRetryCountRef.current = 0
partialXmlRef.current = ""
return false
}
const tpmCheck = quotaManager.checkTPMLimit()
if (!tpmCheck.allowed) {
quotaManager.showTPMLimitToast()
autoRetryCountRef.current = 0
partialXmlRef.current = ""
return false
}
return true
},
})
// Update stopRef so onToolCall can access it
stopRef.current = stop
// Ref to track latest messages for unload persistence
const messagesRef = useRef(messages)
useEffect(() => {
@@ -905,9 +633,6 @@ Continue from EXACTLY where you stopped.`,
xmlSnapshotsRef.current.set(messageIndex, chartXml)
saveXmlSnapshots()
// Check all quota limits
if (!checkAllQuotaLimits()) return
sendChatMessage(parts, chartXml, previousXml, sessionId)
// Token count is tracked in onFinish with actual server usage
@@ -985,30 +710,7 @@ Continue from EXACTLY where you stopped.`,
saveXmlSnapshots()
}
// Check all quota limits (daily requests, tokens, TPM)
const checkAllQuotaLimits = (): boolean => {
const limitCheck = quotaManager.checkDailyLimit()
if (!limitCheck.allowed) {
quotaManager.showQuotaLimitToast()
return false
}
const tokenLimitCheck = quotaManager.checkTokenLimit()
if (!tokenLimitCheck.allowed) {
quotaManager.showTokenLimitToast(tokenLimitCheck.used)
return false
}
const tpmCheck = quotaManager.checkTPMLimit()
if (!tpmCheck.allowed) {
quotaManager.showTPMLimitToast()
return false
}
return true
}
// Send chat message with headers and increment quota
// Send chat message with headers
const sendChatMessage = (
parts: any,
xml: string,
@@ -1017,9 +719,10 @@ Continue from EXACTLY where you stopped.`,
) => {
// Reset all retry/continuation state on user-initiated message
autoRetryCountRef.current = 0
continuationRetryCountRef.current = 0
partialXmlRef.current = ""
const config = getAIConfig()
const config = getSelectedAIConfig()
sendMessage(
{ parts },
@@ -1036,6 +739,20 @@ Continue from EXACTLY where you stopped.`,
"x-ai-api-key": config.aiApiKey,
}),
...(config.aiModel && { "x-ai-model": config.aiModel }),
// AWS Bedrock credentials
...(config.awsAccessKeyId && {
"x-aws-access-key-id": config.awsAccessKeyId,
}),
...(config.awsSecretAccessKey && {
"x-aws-secret-access-key":
config.awsSecretAccessKey,
}),
...(config.awsRegion && {
"x-aws-region": config.awsRegion,
}),
...(config.awsSessionToken && {
"x-aws-session-token": config.awsSessionToken,
}),
}),
...(minimalStyle && {
"x-minimal-style": "true",
@@ -1043,7 +760,6 @@ Continue from EXACTLY where you stopped.`,
},
},
)
quotaManager.incrementRequestCount()
}
// Process files and append content to user text (handles PDF, text, and optionally images)
@@ -1131,13 +847,8 @@ Continue from EXACTLY where you stopped.`,
setMessages(newMessages)
})
// Check all quota limits
if (!checkAllQuotaLimits()) return
// Now send the message after state is guaranteed to be updated
sendChatMessage(userParts, savedXml, previousXml, sessionId)
// Token count is tracked in onFinish with actual server usage
}
const handleEditMessage = async (messageIndex: number, newText: string) => {
@@ -1179,12 +890,8 @@ Continue from EXACTLY where you stopped.`,
setMessages(newMessages)
})
// Check all quota limits
if (!checkAllQuotaLimits()) return
// Now send the edited message after state is guaranteed to be updated
sendChatMessage(newParts, savedXml, previousXml, sessionId)
// Token count is tracked in onFinish with actual server usage
}
// Collapsed view (desktop only)
@@ -1248,32 +955,18 @@ Continue from EXACTLY where you stopped.`,
Next AI Drawio
</h1>
</div>
{!isMobile && (
<Link
href="/about"
target="_blank"
rel="noopener noreferrer"
className="text-sm text-muted-foreground hover:text-foreground transition-colors ml-2"
>
About
</Link>
)}
{!isMobile && (
<Link
href="/about"
target="_blank"
rel="noopener noreferrer"
>
<ButtonWithTooltip
tooltipContent="Due to high usage, I have changed the model to minimax-m2 and added some usage limits. See About page for details."
variant="ghost"
size="icon"
className="h-6 w-6 text-amber-500 hover:text-amber-600"
{!isMobile &&
process.env.NEXT_PUBLIC_SHOW_ABOUT_AND_NOTICE ===
"true" && (
<Link
href="/about"
target="_blank"
rel="noopener noreferrer"
className="text-sm text-muted-foreground hover:text-foreground transition-colors ml-2"
>
<AlertTriangle className="h-4 w-4" />
</ButtonWithTooltip>
</Link>
)}
About
</Link>
)}
</div>
<div className="flex items-center gap-1 justify-end overflow-visible">
<ButtonWithTooltip
@@ -1288,16 +981,23 @@ Continue from EXACTLY where you stopped.`,
/>
</ButtonWithTooltip>
<div className="w-px h-5 bg-border mx-1" />
<a
href="https://github.com/DayuanJiang/next-ai-draw-io"
target="_blank"
rel="noopener noreferrer"
className="p-2 rounded-lg text-muted-foreground hover:text-foreground hover:bg-accent transition-colors"
>
<FaGithub
className={`${isMobile ? "w-4 h-4" : "w-5 h-5"}`}
/>
</a>
<Tooltip>
<TooltipTrigger asChild>
<a
href="https://github.com/DayuanJiang/next-ai-draw-io"
target="_blank"
rel="noopener noreferrer"
className="inline-flex items-center justify-center h-9 w-9 rounded-md text-muted-foreground hover:text-foreground hover:bg-accent transition-colors"
>
<FaGithub
className={`${isMobile ? "w-4 h-4" : "w-5 h-5"}`}
/>
</a>
</TooltipTrigger>
<TooltipContent>{dict.nav.github}</TooltipContent>
</Tooltip>
<ButtonWithTooltip
tooltipContent={dict.nav.settings}
variant="ghost"
@@ -1310,7 +1010,6 @@ Continue from EXACTLY where you stopped.`,
/>
</ButtonWithTooltip>
<div className="hidden sm:flex items-center gap-2">
<LanguageToggle />
{!isMobile && (
<ButtonWithTooltip
tooltipContent={dict.nav.hidePanel}
@@ -1342,6 +1041,14 @@ Continue from EXACTLY where you stopped.`,
/>
</main>
{/* Dev XML Streaming Simulator - only in development */}
{DEBUG && (
<DevXmlSimulator
setMessages={setMessages}
onDisplayChart={onDisplayChart}
/>
)}
{/* Input */}
<footer
className={`${isMobile ? "p-2" : "p-4"} border-t border-border/50 bg-card/50`}
@@ -1361,6 +1068,10 @@ Continue from EXACTLY where you stopped.`,
error={error}
minimalStyle={minimalStyle}
onMinimalStyleChange={setMinimalStyle}
models={modelConfig.models}
selectedModelId={modelConfig.selectedModelId}
onModelSelect={modelConfig.setSelectedModelId}
onConfigureModels={() => setShowModelConfigDialog(true)}
/>
</footer>
@@ -1374,6 +1085,12 @@ Continue from EXACTLY where you stopped.`,
onToggleDarkMode={onToggleDarkMode}
/>
<ModelConfigDialog
open={showModelConfigDialog}
onOpenChange={setShowModelConfigDialog}
modelConfig={modelConfig}
/>
<ResetWarningModal
open={showNewChatDialog}
onOpenChange={setShowNewChatDialog}

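Both truncation paths in the chat panel above (the `display_diagram` handler and the `append_diagram` continuation) hinge on `isMxCellXmlComplete` from `lib/utils`, which is not shown in this diff. A minimal sketch of such a check, assuming a boundary heuristic is enough (the real helper may well be stricter):

```ts
// Minimal sketch, not the actual lib/utils implementation: treat the
// streamed fragment as complete when it ends on an element boundary
// ("/>" or "</mxCell>") rather than mid-tag or mid-attribute. A real
// check likely needs to be XML-aware, since HTML markup inside
// value="..." attributes can fool plain string heuristics.
export function isMxCellXmlCompleteSketch(xml: string): boolean {
    const trimmed = xml.trimEnd()
    if (trimmed.length === 0) return false
    return trimmed.endsWith("/>") || trimmed.endsWith("</mxCell>")
}
```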
View File

@@ -0,0 +1,350 @@
"use client"
import { useEffect, useRef, useState } from "react"
import { wrapWithMxFile } from "@/lib/utils"
// Dev XML presets for streaming simulator
const DEV_XML_PRESETS: Record<string, string> = {
"Simple Box": `<mxCell id="2" value="Hello World" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="120" y="100" width="120" height="60" as="geometry"/>
</mxCell>`,
"Two Boxes with Arrow": `<mxCell id="2" value="Start" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="100" y="100" width="100" height="50" as="geometry"/>
</mxCell>
<mxCell id="3" value="End" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;" vertex="1" parent="1">
<mxGeometry x="300" y="100" width="100" height="50" as="geometry"/>
</mxCell>
<mxCell id="4" value="" style="endArrow=classic;html=1;" edge="1" parent="1" source="2" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>`,
Flowchart: `<mxCell id="2" value="Start" style="ellipse;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="160" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="3" value="Process A" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="140" y="120" width="120" height="60" as="geometry"/>
</mxCell>
<mxCell id="4" value="Decision" style="rhombus;whiteSpace=wrap;html=1;fillColor=#fff2cc;strokeColor=#d6b656;" vertex="1" parent="1">
<mxGeometry x="150" y="220" width="100" height="80" as="geometry"/>
</mxCell>
<mxCell id="5" value="Process B" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="300" y="230" width="120" height="60" as="geometry"/>
</mxCell>
<mxCell id="6" value="End" style="ellipse;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;" vertex="1" parent="1">
<mxGeometry x="160" y="340" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="7" style="endArrow=classic;html=1;" edge="1" parent="1" source="2" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="8" style="endArrow=classic;html=1;" edge="1" parent="1" source="3" target="4">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="9" value="Yes" style="endArrow=classic;html=1;" edge="1" parent="1" source="4" target="6">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="10" value="No" style="endArrow=classic;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1" source="4" target="5">
<mxGeometry relative="1" as="geometry"/>
</mxCell>`,
"Truncated (Error Test)": `<mxCell id="2" value="This cell is truncated" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="120" y="100" width="120" height="60" as="geometry"/>
</mxCell>
<mxCell id="3" value="Incomplete" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor`,
"HTML Escape + Cell Truncate": `<mxCell id="2" value="<b>Chain-of-Thought Prompting</b><br/><font size='12'>Eliciting Reasoning in Large Language Models</font>" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=16;fontStyle=1;" vertex="1" parent="1">
<mxGeometry x="40" y="40" width="720" height="60" as="geometry"/>
</mxCell>
<mxCell id="3" value="<b>Problem: LLM Reasoning Limitations</b><br/>• Scaling parameters alone insufficient for logical tasks<br/>• Arithmetic, commonsense, symbolic reasoning challenges<br/>• Standard prompting fails on multi-step problems" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="40" y="120" width="340" height="120" as="geometry"/>
</mxCell>
<mxCell id="4" value="<b>Traditional Approaches</b><br/>1. <b>Finetuning:</b> Expensive, task-specific<br/>2. <b>Standard Few-Shot:</b> Input→Output pairs<br/> (No explanation of reasoning)" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fff2cc;strokeColor=#d6b656;" vertex="1" parent="1">
<mxGeometry x="420" y="120" width="340" height="120" as="geometry"/>
</mxCell>
<mxCell id="5" value="<b>CoT Methodology</b><br/>• Add reasoning steps to few-shot examples<br/>• Natural language intermediate steps<br/>• No parameter updates needed<br/>• Model learns to generate own thought process" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="40" y="260" width="340" height="100" as="geometry"/>
</mxCell>
<mxCell id="6" value="<b>Example Comparison</b><br/><b>Standard:</b><br/>Q: Roger has 5 balls. He buys 2 cans of 3 balls. How many?<br/>A: 11.<br/><br/><b>CoT:</b><br/>Q: Roger has 5 balls. He buys 2 cans of 3 balls. How many?<br/>A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11." style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" vertex="1" parent="1">
<mxGeometry x="420" y="260" width="340" height="140" as="geometry"/>
</mxCell>
<mxCell id="7" value="<b>Experimental Models</b><br/>• GPT-3 (175B)<br/>• LaMDA (137B)<br/>• PaLM (540B)<br/>• UL2 (20B)<br/>• Codex" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;" vertex="1" parent="1">
<mxGeometry x="40" y="380" width="340" height="100" as="geometry"/>
</mxCell>
<mxCell id="8" value="<b>Reasoning Domains Tested</b><br/>1. <b>Arithmetic:</b> GSM8K, SVAMP, ASDiv, AQuA, MAWPS<br/>2. <b>Commonsense:</b> CSQA, StrategyQA, Date Understanding, Sports Understanding<br/>3. <b>Symbolic:</b> Last Letter Concatenation, Coin Flip" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;strokeColor=#666666;" vertex="1" parent="1">
<mxGeometry x="420" y="420" width="340" height="100" as="geometry"/>
</mxCell>
<mxCell id="9" value="<b>Key Results: Arithmetic</b><br/>• PaLM 540B + CoT: <b>56.9%</b> on GSM8K<br/> (vs 17.9% standard)<br/>• Surpassed finetuned GPT-3 (55%)<br/>• With calculator: <b>58.6%</b>" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="40" y="500" width="220" height="100" as="geometry"/>
</mxCell>
<mxCell id="10" value="<b>Key Results: Commonsense</b><br/>• StrategyQA: <b>75.6%</b><br/> (vs 69.4% SOTA)<br/>• Sports Understanding: <b>95.4%</b><br/> (vs 84% human)" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="280" y="500" width="220" height="100" as="geometry"/>
</mxCell>
<mxCell id="11" value="<b>Key Results: Symbolic</b><br/>• OOD Generalization<br/>• Coin Flip: Trained on 2 flips<br/> Works on 3-4 flips with CoT<br/>• Standard prompting fails" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="540" y="500" width="220" height="100" as="geometry"/>
</mxCell>
<mxCell id="12" value="<b>Emergent Ability of Scale</b><br/>• Small models (&lt;10B): No benefit, often harmful<br/>• Large models (100B+): Reasoning emerges<br/>• CoT gains increase dramatically with scale" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="40" y="620" width="340" height="80" as="geometry"/>
</mxCell>
<mxCell id="13" value="<b>Ablation Studies</b><br/>1. Equation only: Worse than CoT<br/>2. Variable compute (...): No improvement<br/>3. Answer first, then reasoning: Same as baseline<br/>→ Content matters, not just extra tokens" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fff2cc;strokeColor=#d6b656;" vertex="1" parent="1">
<mxGeometry x="420" y="620" width="340" height="80" as="geometry"/>
</mxCell>
<mxCell id="14" value="<b>Error Analysis</b><br/>• Semantic understanding errors<br/>• One-step missing errors<br/>• Calculation errors<br/>• Larger models reduce semantic/missing-step errors" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;" vertex="1" parent="1">
<mxGeometry x="40" y="720" width="340" height="80" as="geometry"/>
</mxCell>
<mxCell id="15" value="<b>Conclusion</b><br/>• CoT unlocks reasoning potential<br/>• Simple paradigm: &quot;show your work&quot;<br/>• Emergent capability of large models<br/>• No specialized architecture needed" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="420" y="720" width="340" height="80" as="geometry"/>
</mxCell>
<mxCell id="16" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="3" target="5">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="17" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="4" target="6">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="18" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="5" target="7">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="19" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="6" target="8">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="20" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.25;entryY=0;" edge="1" parent="1" source="7" target="9">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="21" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="7" target="10">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="22" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.75;entryY=0;" edge="1" parent="1" source="7" target="11">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="23" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="9" target="12">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="24" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="10" target="13">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="25" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="11" target="14">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="26" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="12" target="15">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="27" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="13" target="15">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="28" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;entryX=0.5;entryY=0;" edge="1" parent="1" source="14" target="15">
<mxGeometry relative="1" as="geometry"/>
</mxCell>`,
}
interface DevXmlSimulatorProps {
setMessages: React.Dispatch<React.SetStateAction<any[]>>
onDisplayChart: (xml: string) => void
}
export function DevXmlSimulator({
setMessages,
onDisplayChart,
}: DevXmlSimulatorProps) {
const [devXml, setDevXml] = useState("")
const [isSimulating, setIsSimulating] = useState(false)
const [devIntervalMs, setDevIntervalMs] = useState(1)
const [devChunkSize, setDevChunkSize] = useState(10)
const devStopRef = useRef(false)
const devXmlInitializedRef = useRef(false)
// Restore dev XML from localStorage on mount (after hydration)
useEffect(() => {
const saved = localStorage.getItem("dev-xml-simulator")
if (saved) setDevXml(saved)
devXmlInitializedRef.current = true
}, [])
// Save dev XML to localStorage (only after initial load)
useEffect(() => {
if (devXmlInitializedRef.current) {
localStorage.setItem("dev-xml-simulator", devXml)
}
}, [devXml])
const handleDevSimulate = async () => {
if (!devXml.trim() || isSimulating) return
setIsSimulating(true)
devStopRef.current = false
const toolCallId = `dev-sim-${Date.now()}`
const xml = devXml.trim()
// Add user message and initial assistant message with empty XML
const userMsg = {
id: `user-${Date.now()}`,
role: "user" as const,
parts: [
{
type: "text" as const,
text: "[Dev] Simulating XML streaming",
},
],
}
const assistantMsg = {
id: `assistant-${Date.now()}`,
role: "assistant" as const,
parts: [
{
type: "tool-display_diagram" as const,
toolCallId,
state: "input-streaming" as const,
input: { xml: "" },
},
],
}
setMessages((prev) => [...prev, userMsg, assistantMsg] as any)
// Stream characters progressively
for (let i = 0; i < xml.length; i += devChunkSize) {
if (devStopRef.current) {
setIsSimulating(false)
return
}
const chunk = xml.slice(0, i + devChunkSize)
setMessages((prev) => {
const updated = [...prev]
const lastMsg = updated[updated.length - 1] as any
if (lastMsg?.role === "assistant" && lastMsg.parts?.[0]) {
lastMsg.parts[0].input = { xml: chunk }
}
return updated
})
await new Promise((r) => setTimeout(r, devIntervalMs))
}
if (devStopRef.current) {
setIsSimulating(false)
return
}
// Finalize: set state to output-available
setMessages((prev) => {
const updated = [...prev]
const lastMsg = updated[updated.length - 1] as any
if (lastMsg?.role === "assistant" && lastMsg.parts?.[0]) {
lastMsg.parts[0].state = "output-available"
lastMsg.parts[0].output = "Successfully displayed the diagram."
lastMsg.parts[0].input = { xml }
}
return updated
})
// Display the final diagram
const fullXml = wrapWithMxFile(xml)
onDisplayChart(fullXml)
setIsSimulating(false)
}
return (
<div className="border-t border-dashed border-orange-500/50 px-4 py-2 bg-orange-50/50 dark:bg-orange-950/30">
<details>
<summary className="text-xs text-orange-600 dark:text-orange-400 cursor-pointer font-medium">
Dev: XML Streaming Simulator
</summary>
<div className="mt-2 space-y-2">
<div className="flex items-center gap-2">
<label className="text-xs text-muted-foreground whitespace-nowrap">
Preset:
</label>
<select
onChange={(e) => {
if (e.target.value) {
setDevXml(DEV_XML_PRESETS[e.target.value])
}
}}
className="flex-1 text-xs p-1 border rounded bg-background"
defaultValue=""
>
<option value="" disabled>
Select a preset...
</option>
{Object.keys(DEV_XML_PRESETS).map((name) => (
<option key={name} value={name}>
{name}
</option>
))}
</select>
<button
type="button"
onClick={() => setDevXml("")}
className="px-2 py-1 text-xs text-muted-foreground hover:text-foreground border rounded"
>
Clear
</button>
</div>
<textarea
value={devXml}
onChange={(e) => setDevXml(e.target.value)}
placeholder="Paste mxCell XML here or select a preset..."
className="w-full h-24 text-xs font-mono p-2 border rounded bg-background"
/>
<div className="flex items-center gap-4">
<div className="flex items-center gap-2 flex-1">
<label className="text-xs text-muted-foreground whitespace-nowrap">
Interval:
</label>
<input
type="range"
min="1"
max="200"
step="1"
value={devIntervalMs}
onChange={(e) =>
setDevIntervalMs(Number(e.target.value))
}
className="flex-1 h-1 accent-orange-500"
/>
<span className="text-xs text-muted-foreground w-12">
{devIntervalMs}ms
</span>
</div>
<div className="flex items-center gap-2">
<label className="text-xs text-muted-foreground whitespace-nowrap">
Chars:
</label>
<input
type="number"
min="1"
max="100"
value={devChunkSize}
onChange={(e) =>
setDevChunkSize(
Math.max(1, Number(e.target.value)),
)
}
className="w-14 text-xs p-1 border rounded bg-background"
/>
</div>
</div>
<div className="flex gap-2">
<button
type="button"
onClick={handleDevSimulate}
disabled={isSimulating || !devXml.trim()}
className="px-3 py-1 text-xs bg-orange-500 text-white rounded hover:bg-orange-600 disabled:opacity-50 disabled:cursor-not-allowed"
>
{isSimulating
? "Streaming..."
: `Simulate (${devChunkSize} chars/${devIntervalMs}ms)`}
</button>
{isSimulating && (
<button
type="button"
onClick={() => {
devStopRef.current = true
}}
className="px-3 py-1 text-xs bg-red-500 text-white rounded hover:bg-red-600"
>
Stop
</button>
)}
</div>
</div>
</details>
</div>
)
}

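The simulator's core trick is replaying growing prefixes of the XML into the last assistant message's tool part. The chunking itself can be factored as a small async generator; an illustrative sketch (the generator name is hypothetical, not part of the component):

```ts
// Yield growing prefixes of the XML on a timer, mirroring how tool-call
// input streams in from the model.
async function* streamXmlPrefixes(
    xml: string,
    chunkSize: number,
    intervalMs: number,
): AsyncGenerator<string, void, unknown> {
    for (let i = 0; i < xml.length; i += chunkSize) {
        yield xml.slice(0, i + chunkSize)
        await new Promise((resolve) => setTimeout(resolve, intervalMs))
    }
}
```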
View File

@@ -1,108 +0,0 @@
"use client"
import { Globe } from "lucide-react"
import { usePathname, useRouter, useSearchParams } from "next/navigation"
import { Suspense, useEffect, useRef, useState } from "react"
import { i18n, type Locale } from "@/lib/i18n/config"
const LABELS: Record<string, string> = {
en: "EN",
zh: "中文",
ja: "日本語",
}
function LanguageToggleInner({ className = "" }: { className?: string }) {
const router = useRouter()
const pathname = usePathname() || "/"
const search = useSearchParams()
const [open, setOpen] = useState(false)
const [value, setValue] = useState<Locale>(i18n.defaultLocale)
const ref = useRef<HTMLDivElement | null>(null)
useEffect(() => {
const seg = pathname.split("/").filter(Boolean)
const first = seg[0]
if (first && i18n.locales.includes(first as Locale))
setValue(first as Locale)
else setValue(i18n.defaultLocale)
}, [pathname])
useEffect(() => {
function onDoc(e: MouseEvent) {
if (!ref.current) return
if (!ref.current.contains(e.target as Node)) setOpen(false)
}
if (open) document.addEventListener("mousedown", onDoc)
return () => document.removeEventListener("mousedown", onDoc)
}, [open])
const changeLocale = (lang: string) => {
const parts = pathname.split("/")
if (parts.length > 1 && i18n.locales.includes(parts[1] as Locale)) {
parts[1] = lang
} else {
parts.splice(1, 0, lang)
}
const newPath = parts.join("/") || "/"
const searchStr = search?.toString() ? `?${search.toString()}` : ""
setOpen(false)
router.push(newPath + searchStr)
}
return (
<div className={`relative inline-flex ${className}`} ref={ref}>
<button
aria-haspopup="menu"
aria-expanded={open}
onClick={() => setOpen((s) => !s)}
className="p-2 rounded-full hover:bg-accent/20 transition-colors text-muted-foreground"
aria-label="Change language"
>
<Globe className="w-5 h-5" />
</button>
{open && (
<div className="absolute right-0 top-full mt-2 w-40 bg-popover dark:bg-popover text-popover-foreground rounded-xl shadow-md border border-border/30 overflow-hidden z-50">
<div className="grid gap-0 divide-y divide-border/30">
{i18n.locales.map((loc) => (
<button
key={loc}
onClick={() => changeLocale(loc)}
className={`flex items-center gap-2 px-4 py-2 text-sm w-full text-left hover:bg-accent/10 transition-colors ${value === loc ? "bg-accent/10 font-semibold" : ""}`}
>
<span className="flex-1">
{LABELS[loc] ?? loc}
</span>
{value === loc && (
<span className="text-xs opacity-70">
    ✓
</span>
)}
</button>
))}
</div>
</div>
)}
</div>
)
}
export default function LanguageToggle({
className = "",
}: {
className?: string
}) {
return (
<Suspense
fallback={
<button
className="p-2 rounded-full text-muted-foreground opacity-50"
disabled
>
<Globe className="w-5 h-5" />
</button>
}
>
<LanguageToggleInner className={className} />
</Suspense>
)
}

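The locale rewrite in `changeLocale` above (and its twin `changeLanguage` in the settings dialog below) is a pure path transformation. Extracted as a standalone sketch for clarity; `withLocale` is a hypothetical name, not an exported helper:

```ts
import { i18n, type Locale } from "@/lib/i18n/config"

// Replace the leading locale segment when one is present,
// otherwise insert one.
export function withLocale(pathname: string, lang: Locale): string {
    const parts = pathname.split("/")
    if (parts.length > 1 && i18n.locales.includes(parts[1] as Locale)) {
        parts[1] = lang
    } else {
        parts.splice(1, 0, lang)
    }
    return parts.join("/") || "/"
}

// withLocale("/en/about", "ja") === "/ja/about"
// withLocale("/about", "ja")    === "/ja/about"
```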
File diff suppressed because it is too large

View File

@@ -0,0 +1,222 @@
"use client"
import { Bot, Check, ChevronDown, Server, Settings2 } from "lucide-react"
import { useMemo, useState } from "react"
import {
ModelSelectorContent,
ModelSelectorEmpty,
ModelSelectorGroup,
ModelSelectorInput,
ModelSelectorItem,
ModelSelectorList,
ModelSelectorLogo,
ModelSelectorName,
ModelSelector as ModelSelectorRoot,
ModelSelectorSeparator,
ModelSelectorTrigger,
} from "@/components/ai-elements/model-selector"
import { ButtonWithTooltip } from "@/components/button-with-tooltip"
import { useDictionary } from "@/hooks/use-dictionary"
import type { FlattenedModel } from "@/lib/types/model-config"
import { cn } from "@/lib/utils"
interface ModelSelectorProps {
models: FlattenedModel[]
selectedModelId: string | undefined
onSelect: (modelId: string | undefined) => void
onConfigure: () => void
disabled?: boolean
}
// Map our provider names to models.dev logo names
const PROVIDER_LOGO_MAP: Record<string, string> = {
openai: "openai",
anthropic: "anthropic",
google: "google",
azure: "azure",
bedrock: "amazon-bedrock",
openrouter: "openrouter",
deepseek: "deepseek",
siliconflow: "siliconflow",
gateway: "vercel",
}
// Group models by providerLabel (handles duplicate providers)
function groupModelsByProvider(
models: FlattenedModel[],
): Map<string, { provider: string; models: FlattenedModel[] }> {
const groups = new Map<
string,
{ provider: string; models: FlattenedModel[] }
>()
for (const model of models) {
const key = model.providerLabel
const existing = groups.get(key)
if (existing) {
existing.models.push(model)
} else {
groups.set(key, { provider: model.provider, models: [model] })
}
}
return groups
}
export function ModelSelector({
models,
selectedModelId,
onSelect,
onConfigure,
disabled = false,
}: ModelSelectorProps) {
const dict = useDictionary()
const [open, setOpen] = useState(false)
// Only show validated models in the selector
const validatedModels = useMemo(
() => models.filter((m) => m.validated === true),
[models],
)
const groupedModels = useMemo(
() => groupModelsByProvider(validatedModels),
[validatedModels],
)
// Find selected model for display
const selectedModel = useMemo(
() => models.find((m) => m.id === selectedModelId),
[models, selectedModelId],
)
const handleSelect = (value: string) => {
if (value === "__configure__") {
onConfigure()
} else if (value === "__server_default__") {
onSelect(undefined)
} else {
onSelect(value)
}
setOpen(false)
}
const tooltipContent = selectedModel
? `${selectedModel.modelId} ${dict.modelConfig.clickToChange}`
: `${dict.modelConfig.usingServerDefault} ${dict.modelConfig.clickToChange}`
return (
<ModelSelectorRoot open={open} onOpenChange={setOpen}>
<ModelSelectorTrigger asChild>
<ButtonWithTooltip
tooltipContent={tooltipContent}
variant="ghost"
size="sm"
disabled={disabled}
className="hover:bg-accent gap-1.5 h-8 max-w-[180px] px-2"
>
<Bot className="h-4 w-4 flex-shrink-0 text-muted-foreground" />
<span className="text-xs truncate">
{selectedModel
? selectedModel.modelId
: dict.modelConfig.default}
</span>
<ChevronDown className="h-3 w-3 flex-shrink-0 text-muted-foreground" />
</ButtonWithTooltip>
</ModelSelectorTrigger>
<ModelSelectorContent title={dict.modelConfig.selectModel}>
<ModelSelectorInput
placeholder={dict.modelConfig.searchModels}
/>
<ModelSelectorList className="[&::-webkit-scrollbar]:hidden [-ms-overflow-style:none] [scrollbar-width:none]">
<ModelSelectorEmpty>
{validatedModels.length === 0 && models.length > 0
? dict.modelConfig.noVerifiedModels
: dict.modelConfig.noModelsFound}
</ModelSelectorEmpty>
{/* Server Default Option */}
<ModelSelectorGroup heading={dict.modelConfig.default}>
<ModelSelectorItem
value="__server_default__"
onSelect={handleSelect}
className={cn(
"cursor-pointer",
!selectedModelId && "bg-accent",
)}
>
<Check
className={cn(
"mr-2 h-4 w-4",
!selectedModelId
? "opacity-100"
: "opacity-0",
)}
/>
<Server className="mr-2 h-4 w-4 text-muted-foreground" />
<ModelSelectorName>
{dict.modelConfig.serverDefault}
</ModelSelectorName>
</ModelSelectorItem>
</ModelSelectorGroup>
{/* Configured Models by Provider */}
{Array.from(groupedModels.entries()).map(
([
providerLabel,
{ provider, models: providerModels },
]) => (
<ModelSelectorGroup
key={providerLabel}
heading={providerLabel}
>
{providerModels.map((model) => (
<ModelSelectorItem
key={model.id}
value={model.modelId}
onSelect={() => handleSelect(model.id)}
className="cursor-pointer"
>
<Check
className={cn(
"mr-2 h-4 w-4",
selectedModelId === model.id
? "opacity-100"
: "opacity-0",
)}
/>
<ModelSelectorLogo
provider={
PROVIDER_LOGO_MAP[provider] ||
provider
}
className="mr-2"
/>
<ModelSelectorName>
{model.modelId}
</ModelSelectorName>
</ModelSelectorItem>
))}
</ModelSelectorGroup>
),
)}
{/* Configure Option */}
<ModelSelectorSeparator />
<ModelSelectorGroup>
<ModelSelectorItem
value="__configure__"
onSelect={handleSelect}
className="cursor-pointer"
>
<Settings2 className="mr-2 h-4 w-4" />
<ModelSelectorName>
{dict.modelConfig.configureModels}
</ModelSelectorName>
</ModelSelectorItem>
</ModelSelectorGroup>
{/* Info text */}
<div className="px-3 py-2 text-xs text-muted-foreground border-t">
{dict.modelConfig.onlyVerifiedShown}
</div>
</ModelSelectorList>
</ModelSelectorContent>
</ModelSelectorRoot>
)
}

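Grouping by `providerLabel` rather than `provider` is what keeps two configurations of the same provider in separate sections. A self-contained illustration of that behavior with a trimmed-down model type (the real `FlattenedModel` in `lib/types/model-config` carries more fields):

```ts
type ModelLike = {
    id: string
    modelId: string
    provider: string
    providerLabel: string
}

// Same grouping strategy as groupModelsByProvider above, reduced to the
// fields it actually reads.
function groupByLabel(models: ModelLike[]) {
    const groups = new Map<string, { provider: string; models: ModelLike[] }>()
    for (const model of models) {
        const group = groups.get(model.providerLabel)
        if (group) group.models.push(model)
        else groups.set(model.providerLabel, { provider: model.provider, models: [model] })
    }
    return groups
}

// Two OpenAI configurations with different labels land in two groups:
// groupByLabel([
//     { id: "a", modelId: "gpt-4o", provider: "openai", providerLabel: "OpenAI" },
//     { id: "b", modelId: "gpt-4o-mini", provider: "openai", providerLabel: "OpenAI (work)" },
// ]).size === 2
```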
View File

@@ -1,7 +1,8 @@
"use client"
import { Moon, Sun } from "lucide-react"
import { useEffect, useState } from "react"
import { usePathname, useRouter, useSearchParams } from "next/navigation"
import { Suspense, useEffect, useState } from "react"
import { Button } from "@/components/ui/button"
import {
Dialog,
@@ -21,6 +22,40 @@ import {
} from "@/components/ui/select"
import { Switch } from "@/components/ui/switch"
import { useDictionary } from "@/hooks/use-dictionary"
import { getApiEndpoint } from "@/lib/base-path"
import { i18n, type Locale } from "@/lib/i18n/config"
import { cn } from "@/lib/utils"
// Reusable setting item component for consistent layout
function SettingItem({
label,
description,
children,
}: {
label: string
description?: string
children: React.ReactNode
}) {
return (
<div className="flex items-center justify-between py-4 first:pt-0 last:pb-0">
<div className="space-y-0.5 pr-4">
<Label className="text-sm font-medium">{label}</Label>
{description && (
<p className="text-xs text-muted-foreground max-w-[260px]">
{description}
</p>
)}
</div>
<div className="shrink-0">{children}</div>
</div>
)
}
const LANGUAGE_LABELS: Record<Locale, string> = {
en: "English",
zh: "中文",
ja: "日本語",
}
interface SettingsDialogProps {
open: boolean
@@ -35,10 +70,6 @@ interface SettingsDialogProps {
export const STORAGE_ACCESS_CODE_KEY = "next-ai-draw-io-access-code"
export const STORAGE_CLOSE_PROTECTION_KEY = "next-ai-draw-io-close-protection"
const STORAGE_ACCESS_CODE_REQUIRED_KEY = "next-ai-draw-io-access-code-required"
export const STORAGE_AI_PROVIDER_KEY = "next-ai-draw-io-ai-provider"
export const STORAGE_AI_BASE_URL_KEY = "next-ai-draw-io-ai-base-url"
export const STORAGE_AI_API_KEY_KEY = "next-ai-draw-io-ai-api-key"
export const STORAGE_AI_MODEL_KEY = "next-ai-draw-io-ai-model"
function getStoredAccessCodeRequired(): boolean | null {
if (typeof window === "undefined") return null
@@ -47,7 +78,7 @@ function getStoredAccessCodeRequired(): boolean | null {
return stored === "true"
}
export function SettingsDialog({
function SettingsContent({
open,
onOpenChange,
onCloseProtectionChange,
@@ -57,6 +88,9 @@ export function SettingsDialog({
onToggleDarkMode,
}: SettingsDialogProps) {
const dict = useDictionary()
const router = useRouter()
const pathname = usePathname() || "/"
const search = useSearchParams()
const [accessCode, setAccessCode] = useState("")
const [closeProtection, setCloseProtection] = useState(true)
const [isVerifying, setIsVerifying] = useState(false)
@@ -64,16 +98,13 @@ export function SettingsDialog({
const [accessCodeRequired, setAccessCodeRequired] = useState(
() => getStoredAccessCodeRequired() ?? false,
)
const [provider, setProvider] = useState("")
const [baseUrl, setBaseUrl] = useState("")
const [apiKey, setApiKey] = useState("")
const [modelId, setModelId] = useState("")
const [currentLang, setCurrentLang] = useState("en")
useEffect(() => {
// Only fetch if not cached in localStorage
if (getStoredAccessCodeRequired() !== null) return
fetch("/api/config")
fetch(getApiEndpoint("/api/config"))
.then((res) => {
if (!res.ok) throw new Error(`HTTP ${res.status}`)
return res.json()
@@ -92,6 +123,17 @@ export function SettingsDialog({
})
}, [])
// Detect current language from pathname
useEffect(() => {
const seg = pathname.split("/").filter(Boolean)
const first = seg[0]
if (first && i18n.locales.includes(first as Locale)) {
setCurrentLang(first)
} else {
setCurrentLang(i18n.defaultLocale)
}
}, [pathname])
useEffect(() => {
if (open) {
const storedCode =
@@ -104,16 +146,22 @@ export function SettingsDialog({
// Default to true if not set
setCloseProtection(storedCloseProtection !== "false")
// Load AI provider settings
setProvider(localStorage.getItem(STORAGE_AI_PROVIDER_KEY) || "")
setBaseUrl(localStorage.getItem(STORAGE_AI_BASE_URL_KEY) || "")
setApiKey(localStorage.getItem(STORAGE_AI_API_KEY_KEY) || "")
setModelId(localStorage.getItem(STORAGE_AI_MODEL_KEY) || "")
setError("")
}
}, [open])
const changeLanguage = (lang: string) => {
const parts = pathname.split("/")
if (parts.length > 1 && i18n.locales.includes(parts[1] as Locale)) {
parts[1] = lang
} else {
parts.splice(1, 0, lang)
}
const newPath = parts.join("/") || "/"
const searchStr = search?.toString() ? `?${search.toString()}` : ""
router.push(newPath + searchStr)
}
const handleSave = async () => {
if (!accessCodeRequired) return
@@ -121,12 +169,15 @@ export function SettingsDialog({
setIsVerifying(true)
try {
const response = await fetch("/api/verify-access-code", {
method: "POST",
headers: {
"x-access-code": accessCode.trim(),
const response = await fetch(
getApiEndpoint("/api/verify-access-code"),
{
method: "POST",
headers: {
"x-access-code": accessCode.trim(),
},
},
})
)
const data = await response.json()
@@ -152,20 +203,32 @@ export function SettingsDialog({
}
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="sm:max-w-md">
<DialogHeader>
<DialogTitle>{dict.settings.title}</DialogTitle>
<DialogDescription>
{dict.settings.description}
</DialogDescription>
</DialogHeader>
<div className="space-y-4 py-2">
<DialogContent className="sm:max-w-lg p-0 gap-0">
{/* Header */}
<DialogHeader className="px-6 pt-6 pb-4">
<DialogTitle>{dict.settings.title}</DialogTitle>
<DialogDescription className="mt-1">
{dict.settings.description}
</DialogDescription>
</DialogHeader>
{/* Content */}
<div className="px-6 pb-6">
<div className="divide-y divide-border-subtle">
{/* Access Code (conditional) */}
{accessCodeRequired && (
<div className="space-y-2">
<Label htmlFor="access-code">
{dict.settings.accessCode}
</Label>
<div className="py-4 first:pt-0 space-y-3">
<div className="space-y-0.5">
<Label
htmlFor="access-code"
className="text-sm font-medium"
>
{dict.settings.accessCode}
</Label>
<p className="text-xs text-muted-foreground">
{dict.settings.accessCodeDescription}
</p>
</div>
<div className="flex gap-2">
<Input
id="access-code"
@@ -179,222 +242,60 @@ export function SettingsDialog({
dict.settings.accessCodePlaceholder
}
autoComplete="off"
className="h-9"
/>
<Button
onClick={handleSave}
disabled={isVerifying || !accessCode.trim()}
className="h-9 px-4 rounded-xl"
>
{isVerifying ? "..." : dict.common.save}
</Button>
</div>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.accessCodeDescription}
</p>
{error && (
<p className="text-[0.8rem] text-destructive">
<p className="text-xs text-destructive">
{error}
</p>
)}
</div>
)}
<div className="space-y-2">
<Label>{dict.settings.aiProvider}</Label>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.aiProviderDescription}
</p>
<div className="space-y-3 pt-2">
<div className="space-y-2">
<Label htmlFor="ai-provider">
{dict.settings.provider}
</Label>
<Select
value={provider || "default"}
onValueChange={(value) => {
const actualValue =
value === "default" ? "" : value
setProvider(actualValue)
localStorage.setItem(
STORAGE_AI_PROVIDER_KEY,
actualValue,
)
}}
>
<SelectTrigger id="ai-provider">
<SelectValue
placeholder={
dict.settings.useServerDefault
}
/>
</SelectTrigger>
<SelectContent>
<SelectItem value="default">
{dict.settings.useServerDefault}
</SelectItem>
<SelectItem value="openai">
{dict.providers.openai}
</SelectItem>
<SelectItem value="anthropic">
{dict.providers.anthropic}
</SelectItem>
<SelectItem value="google">
{dict.providers.google}
</SelectItem>
<SelectItem value="azure">
{dict.providers.azure}
</SelectItem>
<SelectItem value="openrouter">
{dict.providers.openrouter}
</SelectItem>
<SelectItem value="deepseek">
{dict.providers.deepseek}
</SelectItem>
<SelectItem value="siliconflow">
{dict.providers.siliconflow}
</SelectItem>
</SelectContent>
</Select>
</div>
{provider && provider !== "default" && (
<>
<div className="space-y-2">
<Label htmlFor="ai-model">
{dict.settings.modelId}
</Label>
<Input
id="ai-model"
value={modelId}
onChange={(e) => {
setModelId(e.target.value)
localStorage.setItem(
STORAGE_AI_MODEL_KEY,
e.target.value,
)
}}
placeholder={
provider === "openai"
? "e.g., gpt-4o"
: provider === "anthropic"
? "e.g., claude-sonnet-4-5"
: provider === "google"
? "e.g., gemini-2.0-flash-exp"
: provider ===
"deepseek"
? "e.g., deepseek-chat"
: dict.settings
.modelId
}
/>
</div>
<div className="space-y-2">
<Label htmlFor="ai-api-key">
{dict.settings.apiKey}
</Label>
<Input
id="ai-api-key"
type="password"
value={apiKey}
onChange={(e) => {
setApiKey(e.target.value)
localStorage.setItem(
STORAGE_AI_API_KEY_KEY,
e.target.value,
)
}}
placeholder={
dict.settings.apiKeyPlaceholder
}
autoComplete="off"
/>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.overrides}{" "}
{provider === "openai"
? "OPENAI_API_KEY"
: provider === "anthropic"
? "ANTHROPIC_API_KEY"
: provider === "google"
? "GOOGLE_GENERATIVE_AI_API_KEY"
: provider === "azure"
? "AZURE_API_KEY"
: provider ===
"openrouter"
? "OPENROUTER_API_KEY"
: provider ===
"deepseek"
? "DEEPSEEK_API_KEY"
: provider ===
"siliconflow"
? "SILICONFLOW_API_KEY"
: "server API key"}
</p>
</div>
<div className="space-y-2">
<Label htmlFor="ai-base-url">
{dict.settings.baseUrl}
</Label>
<Input
id="ai-base-url"
value={baseUrl}
onChange={(e) => {
setBaseUrl(e.target.value)
localStorage.setItem(
STORAGE_AI_BASE_URL_KEY,
e.target.value,
)
}}
placeholder={
provider === "anthropic"
? "https://api.anthropic.com/v1"
: provider === "siliconflow"
? "https://api.siliconflow.com/v1"
: dict.settings
.customEndpoint
}
/>
</div>
<Button
variant="outline"
size="sm"
className="w-full"
onClick={() => {
localStorage.removeItem(
STORAGE_AI_PROVIDER_KEY,
)
localStorage.removeItem(
STORAGE_AI_BASE_URL_KEY,
)
localStorage.removeItem(
STORAGE_AI_API_KEY_KEY,
)
localStorage.removeItem(
STORAGE_AI_MODEL_KEY,
)
setProvider("")
setBaseUrl("")
setApiKey("")
setModelId("")
}}
>
{dict.settings.clearSettings}
</Button>
</>
)}
</div>
</div>
<div className="flex items-center justify-between">
<div className="space-y-0.5">
<Label htmlFor="theme-toggle">
{dict.settings.theme}
</Label>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.themeDescription}
</p>
</div>
{/* Language */}
<SettingItem
label={dict.settings.language}
description={dict.settings.languageDescription}
>
<Select
value={currentLang}
onValueChange={changeLanguage}
>
<SelectTrigger
id="language-select"
className="w-[120px] h-9 rounded-xl"
>
<SelectValue />
</SelectTrigger>
<SelectContent>
{i18n.locales.map((locale) => (
<SelectItem key={locale} value={locale}>
{LANGUAGE_LABELS[locale]}
</SelectItem>
))}
</SelectContent>
</Select>
</SettingItem>
{/* Theme */}
<SettingItem
label={dict.settings.theme}
description={dict.settings.themeDescription}
>
<Button
id="theme-toggle"
variant="outline"
size="icon"
onClick={onToggleDarkMode}
className="h-9 w-9 rounded-xl border-border-subtle hover:bg-interactive-hover"
>
{darkMode ? (
<Sun className="h-4 w-4" />
@@ -402,42 +303,35 @@ export function SettingsDialog({
<Moon className="h-4 w-4" />
)}
</Button>
</div>
</SettingItem>
<div className="flex items-center justify-between">
<div className="space-y-0.5">
<Label htmlFor="drawio-ui">
{dict.settings.drawioStyle}
</Label>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.drawioStyleDescription}{" "}
{drawioUi === "min"
? dict.settings.minimal
: dict.settings.sketch}
</p>
</div>
{/* Draw.io Style */}
<SettingItem
label={dict.settings.drawioStyle}
description={`${dict.settings.drawioStyleDescription} ${
drawioUi === "min"
? dict.settings.minimal
: dict.settings.sketch
}`}
>
<Button
id="drawio-ui"
variant="outline"
size="sm"
onClick={onToggleDrawioUi}
className="h-9 w-[120px] rounded-xl border-border-subtle hover:bg-interactive-hover font-normal"
>
{dict.settings.switchTo}{" "}
{drawioUi === "min"
? dict.settings.sketch
: dict.settings.minimal}
</Button>
</div>
</SettingItem>
<div className="flex items-center justify-between">
<div className="space-y-0.5">
<Label htmlFor="close-protection">
{dict.settings.closeProtection}
</Label>
<p className="text-[0.8rem] text-muted-foreground">
{dict.settings.closeProtectionDescription}
</p>
</div>
{/* Close Protection */}
<SettingItem
label={dict.settings.closeProtection}
description={dict.settings.closeProtectionDescription}
>
<Switch
id="close-protection"
checked={closeProtection}
@@ -450,14 +344,34 @@ export function SettingsDialog({
onCloseProtectionChange?.(checked)
}}
/>
</div>
</SettingItem>
</div>
<div className="pt-4 border-t border-border/50">
<p className="text-[0.75rem] text-muted-foreground text-center">
Version {process.env.APP_VERSION}
</p>
</div>
</DialogContent>
</div>
{/* Footer */}
<div className="px-6 py-4 border-t border-border-subtle bg-surface-1/50 rounded-b-2xl">
<p className="text-xs text-muted-foreground text-center">
Version {process.env.APP_VERSION}
</p>
</div>
</DialogContent>
)
}
export function SettingsDialog(props: SettingsDialogProps) {
return (
<Dialog open={props.open} onOpenChange={props.onOpenChange}>
<Suspense
fallback={
<DialogContent className="sm:max-w-lg p-0">
<div className="h-80 flex items-center justify-center">
<div className="animate-spin h-6 w-6 border-2 border-primary border-t-transparent rounded-full" />
</div>
</DialogContent>
}
>
<SettingsContent {...props} />
</Suspense>
</Dialog>
)
}

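Several fetches above are now routed through `getApiEndpoint` from `lib/base-path`, whose implementation does not appear in this diff. A plausible minimal sketch, assuming the prefix comes from a public env variable (the real helper may differ):

```ts
// Assumption-only sketch of a base-path helper; lib/base-path may read
// the prefix differently.
const BASE_PATH = process.env.NEXT_PUBLIC_BASE_PATH ?? ""

export function getApiEndpointSketch(path: string): string {
    // "/api/config" -> "/drawio/api/config" when the app is mounted under
    // a basePath such as "/drawio"; unchanged when no prefix is set.
    return `${BASE_PATH}${path}`
}
```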
View File

@@ -0,0 +1,157 @@
"use client"
import * as React from "react"
import * as AlertDialogPrimitive from "@radix-ui/react-alert-dialog"
import { cn } from "@/lib/utils"
import { buttonVariants } from "@/components/ui/button"
function AlertDialog({
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Root>) {
return <AlertDialogPrimitive.Root data-slot="alert-dialog" {...props} />
}
function AlertDialogTrigger({
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Trigger>) {
return (
<AlertDialogPrimitive.Trigger data-slot="alert-dialog-trigger" {...props} />
)
}
function AlertDialogPortal({
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Portal>) {
return (
<AlertDialogPrimitive.Portal data-slot="alert-dialog-portal" {...props} />
)
}
function AlertDialogOverlay({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Overlay>) {
return (
<AlertDialogPrimitive.Overlay
data-slot="alert-dialog-overlay"
className={cn(
"data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 fixed inset-0 z-50 bg-black/50",
className
)}
{...props}
/>
)
}
function AlertDialogContent({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Content>) {
return (
<AlertDialogPortal>
<AlertDialogOverlay />
<AlertDialogPrimitive.Content
data-slot="alert-dialog-content"
className={cn(
"bg-background data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 fixed top-[50%] left-[50%] z-50 grid w-full max-w-[calc(100%-2rem)] translate-x-[-50%] translate-y-[-50%] gap-4 rounded-lg border p-6 shadow-lg duration-200 sm:max-w-lg",
className
)}
{...props}
/>
</AlertDialogPortal>
)
}
function AlertDialogHeader({
className,
...props
}: React.ComponentProps<"div">) {
return (
<div
data-slot="alert-dialog-header"
className={cn("flex flex-col gap-2 text-center sm:text-left", className)}
{...props}
/>
)
}
function AlertDialogFooter({
className,
...props
}: React.ComponentProps<"div">) {
return (
<div
data-slot="alert-dialog-footer"
className={cn(
"flex flex-col-reverse gap-2 sm:flex-row sm:justify-end",
className
)}
{...props}
/>
)
}
function AlertDialogTitle({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Title>) {
return (
<AlertDialogPrimitive.Title
data-slot="alert-dialog-title"
className={cn("text-lg font-semibold", className)}
{...props}
/>
)
}
function AlertDialogDescription({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Description>) {
return (
<AlertDialogPrimitive.Description
data-slot="alert-dialog-description"
className={cn("text-muted-foreground text-sm", className)}
{...props}
/>
)
}
function AlertDialogAction({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Action>) {
return (
<AlertDialogPrimitive.Action
className={cn(buttonVariants(), className)}
{...props}
/>
)
}
function AlertDialogCancel({
className,
...props
}: React.ComponentProps<typeof AlertDialogPrimitive.Cancel>) {
return (
<AlertDialogPrimitive.Cancel
className={cn(buttonVariants({ variant: "outline" }), className)}
{...props}
/>
)
}
export {
AlertDialog,
AlertDialogPortal,
AlertDialogOverlay,
AlertDialogTrigger,
AlertDialogContent,
AlertDialogHeader,
AlertDialogFooter,
AlertDialogTitle,
AlertDialogDescription,
AlertDialogAction,
AlertDialogCancel,
}
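A typical composition of these primitives, as a usage sketch only: the copy and trigger are illustrative, and the import path assumes the file lands at the conventional `components/ui/alert-dialog.tsx`.

```tsx
import {
    AlertDialog,
    AlertDialogAction,
    AlertDialogCancel,
    AlertDialogContent,
    AlertDialogDescription,
    AlertDialogFooter,
    AlertDialogHeader,
    AlertDialogTitle,
    AlertDialogTrigger,
} from "@/components/ui/alert-dialog"

export function ConfirmResetExample() {
    return (
        <AlertDialog>
            <AlertDialogTrigger asChild>
                <button type="button">New chat</button>
            </AlertDialogTrigger>
            <AlertDialogContent>
                <AlertDialogHeader>
                    <AlertDialogTitle>Start a new chat?</AlertDialogTitle>
                    <AlertDialogDescription>
                        The current conversation will be cleared.
                    </AlertDialogDescription>
                </AlertDialogHeader>
                <AlertDialogFooter>
                    <AlertDialogCancel>Cancel</AlertDialogCancel>
                    <AlertDialogAction>Continue</AlertDialogAction>
                </AlertDialogFooter>
            </AlertDialogContent>
        </AlertDialog>
    )
}
```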

components/ui/command.tsx
View File

@@ -0,0 +1,191 @@
"use client"
import * as React from "react"
import { Command as CommandPrimitive } from "cmdk"
import { SearchIcon } from "lucide-react"
import { cn } from "@/lib/utils"
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog"
function Command({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive>) {
return (
<CommandPrimitive
data-slot="command"
className={cn(
"bg-popover text-popover-foreground flex h-full w-full flex-col overflow-hidden rounded-md",
className
)}
{...props}
/>
)
}
function CommandDialog({
title = "Command Palette",
description = "Search for a command to run...",
children,
className,
...props
}: React.ComponentProps<typeof Dialog> & {
title?: string
description?: string
className?: string
}) {
return (
<Dialog {...props}>
<DialogHeader className="sr-only">
<DialogTitle>{title}</DialogTitle>
<DialogDescription>{description}</DialogDescription>
</DialogHeader>
<DialogContent className={cn("overflow-hidden p-0", className)}>
<Command className="[&_[cmdk-group-heading]]:text-muted-foreground **:data-[slot=command-input-wrapper]:h-12 [&_[cmdk-group-heading]]:px-2 [&_[cmdk-group-heading]]:font-medium [&_[cmdk-group]]:px-2 [&_[cmdk-group]:not([hidden])_~[cmdk-group]]:pt-0 [&_[cmdk-input-wrapper]_svg]:h-5 [&_[cmdk-input-wrapper]_svg]:w-5 [&_[cmdk-input]]:h-12 [&_[cmdk-item]]:px-2 [&_[cmdk-item]]:py-3 [&_[cmdk-item]_svg]:h-5 [&_[cmdk-item]_svg]:w-5">
{children}
</Command>
</DialogContent>
</Dialog>
)
}
function CommandInput({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive.Input>) {
return (
<div
data-slot="command-input-wrapper"
className="flex h-9 items-center gap-2 border-b px-3"
>
<SearchIcon className="size-4 shrink-0 opacity-50" />
<CommandPrimitive.Input
data-slot="command-input"
className={cn(
"placeholder:text-muted-foreground flex h-10 w-full rounded-md bg-transparent py-3 text-sm outline-hidden disabled:cursor-not-allowed disabled:opacity-50",
className
)}
{...props}
/>
</div>
)
}
function CommandList({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive.List>) {
return (
<CommandPrimitive.List
data-slot="command-list"
className={cn(
"max-h-[300px] scroll-py-1 overflow-x-hidden overflow-y-auto",
className
)}
{...props}
/>
)
}
function CommandEmpty({
...props
}: React.ComponentProps<typeof CommandPrimitive.Empty>) {
return (
<CommandPrimitive.Empty
data-slot="command-empty"
className="py-6 text-center text-sm"
{...props}
/>
)
}
function CommandGroup({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive.Group>) {
return (
<CommandPrimitive.Group
data-slot="command-group"
className={cn(
"text-foreground [&_[cmdk-group-heading]]:text-muted-foreground overflow-hidden p-1 [&_[cmdk-group-heading]]:px-2 [&_[cmdk-group-heading]]:py-1.5 [&_[cmdk-group-heading]]:text-xs [&_[cmdk-group-heading]]:font-medium",
className
)}
{...props}
/>
)
}
function CommandSeparator({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive.Separator>) {
return (
<CommandPrimitive.Separator
data-slot="command-separator"
className={cn("bg-border -mx-1 h-px", className)}
{...props}
/>
)
}
function CommandItem({
className,
...props
}: React.ComponentProps<typeof CommandPrimitive.Item>) {
return (
<CommandPrimitive.Item
data-slot="command-item"
className={cn(
"data-[selected=true]:bg-accent data-[selected=true]:text-accent-foreground [&_svg:not([class*='text-'])]:text-muted-foreground relative flex cursor-default items-center gap-2 rounded-sm px-2 py-1.5 text-sm outline-hidden select-none data-[disabled=true]:pointer-events-none data-[disabled=true]:opacity-50 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4",
className
)}
onMouseEnter={(e) => {
// Ensure hover updates selection for visual feedback
const item = e.currentTarget
item.setAttribute("data-selected", "true")
// Deselect siblings
const siblings = item.parentElement?.querySelectorAll("[cmdk-item]")
siblings?.forEach((sibling) => {
if (sibling !== item) {
sibling.setAttribute("data-selected", "false")
}
})
}}
{...props}
/>
)
}
function CommandShortcut({
className,
...props
}: React.ComponentProps<"span">) {
return (
<span
data-slot="command-shortcut"
className={cn(
"text-muted-foreground ml-auto text-xs tracking-widest",
className
)}
{...props}
/>
)
}
export {
Command,
CommandDialog,
CommandInput,
CommandList,
CommandEmpty,
CommandGroup,
CommandItem,
CommandShortcut,
CommandSeparator,
}

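A minimal usage sketch of the command palette exported above; the open-state wiring, group heading, and items are illustrative placeholders, not code from this repository:
"use client"
import * as React from "react"
import {
  CommandDialog,
  CommandEmpty,
  CommandGroup,
  CommandInput,
  CommandItem,
  CommandList,
} from "@/components/ui/command"
// Illustrative example only; labels and handlers are placeholders.
export function ExampleCommandPalette() {
  const [open, setOpen] = React.useState(false)
  return (
    <CommandDialog open={open} onOpenChange={setOpen}>
      <CommandInput placeholder="Type a command or search..." />
      <CommandList>
        <CommandEmpty>No results found.</CommandEmpty>
        <CommandGroup heading="Diagrams">
          <CommandItem onSelect={() => setOpen(false)}>New diagram</CommandItem>
          <CommandItem onSelect={() => setOpen(false)}>Export diagram</CommandItem>
        </CommandGroup>
      </CommandList>
    </CommandDialog>
  )
}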
components/ui/dialog.tsx

@@ -38,7 +38,10 @@ function DialogOverlay({
<DialogPrimitive.Overlay
data-slot="dialog-overlay"
className={cn(
"data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 fixed inset-0 z-50 bg-black/50",
"fixed inset-0 z-50 bg-black/40 backdrop-blur-[2px]",
"data-[state=open]:animate-in data-[state=closed]:animate-out",
"data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0",
"duration-200",
className
)}
{...props}
@@ -57,13 +60,32 @@ function DialogContent({
<DialogPrimitive.Content
data-slot="dialog-content"
className={cn(
"bg-background data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 fixed top-[50%] left-[50%] z-50 grid w-full max-w-[calc(100%-2rem)] translate-x-[-50%] translate-y-[-50%] gap-4 rounded-lg border p-6 shadow-lg duration-200 sm:max-w-lg",
// Base styles
"fixed top-[50%] left-[50%] z-50 w-full",
"max-w-[calc(100%-2rem)] translate-x-[-50%] translate-y-[-50%]",
"grid gap-4 p-6",
// Refined visual treatment
"bg-surface-0 rounded-2xl border border-border-subtle shadow-dialog",
// Entry/exit animations
"data-[state=open]:animate-in data-[state=closed]:animate-out",
"data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0",
"data-[state=closed]:zoom-out-[0.98] data-[state=open]:zoom-in-[0.98]",
"data-[state=closed]:slide-out-to-top-[2%] data-[state=open]:slide-in-from-top-[2%]",
"duration-200 sm:max-w-lg",
className
)}
{...props}
>
{children}
<DialogPrimitive.Close className="ring-offset-background focus:ring-ring data-[state=open]:bg-accent data-[state=open]:text-muted-foreground absolute top-4 right-4 rounded-xs opacity-70 transition-opacity hover:opacity-100 focus:ring-2 focus:ring-offset-2 focus:outline-hidden disabled:pointer-events-none [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4">
<DialogPrimitive.Close className={cn(
"absolute top-4 right-4 rounded-xl p-1.5",
"text-muted-foreground/60 hover:text-foreground",
"hover:bg-interactive-hover",
"transition-all duration-150",
"focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2",
"disabled:pointer-events-none",
"[&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg]:size-4"
)}>
<XIcon />
<span className="sr-only">Close</span>
</DialogPrimitive.Close>
@@ -102,7 +124,10 @@ function DialogTitle({
return (
<DialogPrimitive.Title
data-slot="dialog-title"
className={cn("text-lg leading-none font-semibold", className)}
className={cn(
"text-xl font-semibold tracking-tight leading-tight",
className
)}
{...props}
/>
)
@@ -115,7 +140,10 @@ function DialogDescription({
return (
<DialogPrimitive.Description
data-slot="dialog-description"
className={cn("text-muted-foreground text-sm", className)}
className={cn(
"text-sm text-muted-foreground leading-relaxed",
className
)}
{...props}
/>
)

components/ui/input.tsx

@@ -8,9 +8,30 @@ function Input({ className, type, ...props }: React.ComponentProps<"input">) {
type={type}
data-slot="input"
className={cn(
"file:text-foreground placeholder:text-muted-foreground selection:bg-primary selection:text-primary-foreground dark:bg-input/30 border-input flex h-9 w-full min-w-0 rounded-md border bg-transparent px-3 py-1 text-base shadow-xs transition-[color,box-shadow] outline-none file:inline-flex file:h-7 file:border-0 file:bg-transparent file:text-sm file:font-medium disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50 md:text-sm",
"focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:ring-[3px]",
"aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive",
// Base styles
"flex h-10 w-full min-w-0 rounded-xl px-3.5 py-2",
"border border-border-subtle bg-surface-1",
"text-sm text-foreground",
// Placeholder
"placeholder:text-muted-foreground/60",
// Selection
"selection:bg-primary selection:text-primary-foreground",
// Transitions
"transition-all duration-150 ease-out",
// Hover state
"hover:border-border-default",
// Focus state - refined ring
"focus:outline-none focus:border-primary focus:ring-2 focus:ring-primary/10",
// File input
"file:text-foreground file:inline-flex file:h-7 file:border-0",
"file:bg-transparent file:text-sm file:font-medium",
// Disabled
"disabled:pointer-events-none disabled:cursor-not-allowed disabled:opacity-50",
// Invalid state
"aria-invalid:border-destructive aria-invalid:ring-destructive/20",
"dark:aria-invalid:ring-destructive/40",
// Dark mode background
"dark:bg-surface-1",
className
)}
{...props}

components/ui/popover.tsx (new file)

@@ -0,0 +1,48 @@
"use client"
import * as React from "react"
import * as PopoverPrimitive from "@radix-ui/react-popover"
import { cn } from "@/lib/utils"
function Popover({
...props
}: React.ComponentProps<typeof PopoverPrimitive.Root>) {
return <PopoverPrimitive.Root data-slot="popover" {...props} />
}
function PopoverTrigger({
...props
}: React.ComponentProps<typeof PopoverPrimitive.Trigger>) {
return <PopoverPrimitive.Trigger data-slot="popover-trigger" {...props} />
}
function PopoverContent({
className,
align = "center",
sideOffset = 4,
...props
}: React.ComponentProps<typeof PopoverPrimitive.Content>) {
return (
<PopoverPrimitive.Portal>
<PopoverPrimitive.Content
data-slot="popover-content"
align={align}
sideOffset={sideOffset}
className={cn(
"bg-popover text-popover-foreground data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 z-50 w-72 origin-(--radix-popover-content-transform-origin) rounded-md border p-4 shadow-md outline-hidden",
className
)}
{...props}
/>
</PopoverPrimitive.Portal>
)
}
function PopoverAnchor({
...props
}: React.ComponentProps<typeof PopoverPrimitive.Anchor>) {
return <PopoverPrimitive.Anchor data-slot="popover-anchor" {...props} />
}
export { Popover, PopoverTrigger, PopoverContent, PopoverAnchor }


@@ -5,6 +5,7 @@ import { createContext, useContext, useEffect, useRef, useState } from "react"
import type { DrawIoEmbedRef } from "react-drawio"
import { STORAGE_DIAGRAM_XML_KEY } from "@/components/chat-panel"
import type { ExportFormat } from "@/components/save-dialog"
import { getApiEndpoint } from "@/lib/base-path"
import { extractDiagramXML, validateAndFixXml } from "../lib/utils"
interface DiagramContextType {
@@ -329,7 +330,7 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
sessionId?: string,
) => {
try {
await fetch("/api/log-save", {
await fetch(getApiEndpoint("/api/log-save"), {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ filename, format, sessionId }),

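lib/base-path.ts itself is not part of this comparison; a minimal sketch of what getApiEndpoint is assumed to do (prefix API routes with the configured NEXT_PUBLIC_BASE_PATH so requests still resolve under a subdirectory deployment) might look like this:
// Sketch only; the real lib/base-path.ts is not shown in this diff.
export function getApiEndpoint(path: string): string {
  const basePath = process.env.NEXT_PUBLIC_BASE_PATH ?? ""
  return `${basePath}${path}`
}
// e.g. getApiEndpoint("/api/log-save") returns "/nextaidrawio/api/log-save" when NEXT_PUBLIC_BASE_PATH=/nextaidrawio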
docker-compose.yml

@@ -7,6 +7,11 @@ services:
context: .
args:
- NEXT_PUBLIC_DRAWIO_BASE_URL=http://localhost:8080
# Uncomment below for subdirectory deployment
# - NEXT_PUBLIC_BASE_PATH=/nextaidrawio
ports: ["3000:3000"]
env_file: .env
environment:
# For subdirectory deployment, uncomment and set your path:
# NEXT_PUBLIC_BASE_PATH: /nextaidrawio
depends_on: [drawio]

electron.d.ts (vendored)

@@ -2,29 +2,6 @@
* Type declarations for Electron API exposed via preload script
*/
/** Configuration preset interface */
interface ConfigPreset {
id: string
name: string
createdAt: number
updatedAt: number
config: {
AI_PROVIDER?: string
AI_MODEL?: string
AI_API_KEY?: string
AI_BASE_URL?: string
TEMPERATURE?: string
[key: string]: string | undefined
}
}
/** Result of applying a preset */
interface ApplyPresetResult {
success: boolean
error?: string
env?: Record<string, string>
}
declare global {
interface Window {
/** Main window Electron API */
@@ -46,29 +23,7 @@ declare global {
/** Save data to file via save dialog */
saveFile: (data: string) => Promise<boolean>
}
/** Settings window Electron API */
settingsAPI?: {
/** Get all configuration presets */
getPresets: () => Promise<ConfigPreset[]>
/** Get current preset ID */
getCurrentPresetId: () => Promise<string | null>
/** Get current preset */
getCurrentPreset: () => Promise<ConfigPreset | null>
/** Save (create or update) a preset */
savePreset: (preset: {
id?: string
name: string
config: Record<string, string | undefined>
}) => Promise<ConfigPreset>
/** Delete a preset */
deletePreset: (id: string) => Promise<boolean>
/** Apply a preset (sets environment variables and restarts server) */
applyPreset: (id: string) => Promise<ApplyPresetResult>
/** Close settings window */
close: () => void
}
}
}
export { ConfigPreset, ApplyPresetResult }
export {}

electron/app-menu.ts

@@ -1,19 +1,4 @@
import {
app,
BrowserWindow,
dialog,
Menu,
type MenuItemConstructorOptions,
shell,
} from "electron"
import {
applyPresetToEnv,
getAllPresets,
getCurrentPresetId,
setCurrentPreset,
} from "./config-manager"
import { restartNextServer } from "./next-server"
import { showSettingsWindow } from "./settings-window"
import { app, Menu, type MenuItemConstructorOptions, shell } from "electron"
/**
* Build and set the application menu
@@ -24,13 +9,6 @@ export function buildAppMenu(): void {
Menu.setApplicationMenu(menu)
}
/**
* Rebuild the menu (call this when presets change)
*/
export function rebuildAppMenu(): void {
buildAppMenu()
}
/**
* Get the menu template
*/
@@ -46,15 +24,6 @@ function getMenuTemplate(): MenuItemConstructorOptions[] {
submenu: [
{ role: "about" },
{ type: "separator" },
{
label: "Settings...",
accelerator: "CmdOrCtrl+,",
click: () => {
const win = BrowserWindow.getFocusedWindow()
showSettingsWindow(win || undefined)
},
},
{ type: "separator" },
{ role: "services" },
{ type: "separator" },
{ role: "hide" },
@@ -69,22 +38,7 @@ function getMenuTemplate(): MenuItemConstructorOptions[] {
// File menu
template.push({
label: "File",
submenu: [
...(isMac
? []
: [
{
label: "Settings",
accelerator: "CmdOrCtrl+,",
click: () => {
const win = BrowserWindow.getFocusedWindow()
showSettingsWindow(win || undefined)
},
},
{ type: "separator" } as MenuItemConstructorOptions,
]),
isMac ? { role: "close" } : { role: "quit" },
],
submenu: [isMac ? { role: "close" } : { role: "quit" }],
})
// Edit menu
@@ -129,9 +83,6 @@ function getMenuTemplate(): MenuItemConstructorOptions[] {
],
})
// Configuration menu with presets
template.push(buildConfigMenu())
// Window menu
template.push({
label: "Window",
@@ -172,70 +123,3 @@ function getMenuTemplate(): MenuItemConstructorOptions[] {
return template
}
/**
* Build the Configuration menu with presets
*/
function buildConfigMenu(): MenuItemConstructorOptions {
const presets = getAllPresets()
const currentPresetId = getCurrentPresetId()
const presetItems: MenuItemConstructorOptions[] = presets.map((preset) => ({
label: preset.name,
type: "radio",
checked: preset.id === currentPresetId,
click: async () => {
const previousPresetId = getCurrentPresetId()
const env = applyPresetToEnv(preset.id)
if (env) {
try {
await restartNextServer()
rebuildAppMenu() // Rebuild menu to update checkmarks
} catch (error) {
console.error("Failed to restart server:", error)
// Revert to previous preset on failure
if (previousPresetId) {
applyPresetToEnv(previousPresetId)
} else {
setCurrentPreset(null)
}
// Rebuild menu to restore previous checkmark state
rebuildAppMenu()
// Show error dialog to notify user
dialog.showErrorBox(
"Configuration Error",
`Failed to apply preset "${preset.name}". The server could not be restarted.\n\nThe previous configuration has been restored.\n\nError: ${error instanceof Error ? error.message : String(error)}`,
)
}
}
},
}))
return {
label: "Configuration",
submenu: [
...(presetItems.length > 0
? [
{ label: "Switch Preset", enabled: false },
{ type: "separator" } as MenuItemConstructorOptions,
...presetItems,
{ type: "separator" } as MenuItemConstructorOptions,
]
: []),
{
label:
presetItems.length > 0
? "Manage Presets..."
: "Add Configuration Preset...",
click: () => {
const win = BrowserWindow.getFocusedWindow()
showSettingsWindow(win || undefined)
},
},
],
}
}

electron/config-manager.ts (deleted)

@@ -1,460 +0,0 @@
import { randomUUID } from "node:crypto"
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs"
import path from "node:path"
import { app, safeStorage } from "electron"
/**
* Fields that contain sensitive data and should be encrypted
*/
const SENSITIVE_FIELDS = ["AI_API_KEY"] as const
/**
* Prefix to identify encrypted values
*/
const ENCRYPTED_PREFIX = "encrypted:"
/**
* Check if safeStorage encryption is available
*/
function isEncryptionAvailable(): boolean {
return safeStorage.isEncryptionAvailable()
}
/**
* Track if we've already warned about plaintext storage
*/
let hasWarnedAboutPlaintext = false
/**
* Encrypt a sensitive value using safeStorage
* Warns if encryption is not available (API key stored in plaintext)
*/
function encryptValue(value: string): string {
if (!value) {
return value
}
if (!isEncryptionAvailable()) {
if (!hasWarnedAboutPlaintext) {
console.warn(
"⚠️ SECURITY WARNING: safeStorage not available. " +
"API keys will be stored in PLAINTEXT. " +
"On Linux, install gnome-keyring or similar for secure storage.",
)
hasWarnedAboutPlaintext = true
}
return value
}
try {
const encrypted = safeStorage.encryptString(value)
return ENCRYPTED_PREFIX + encrypted.toString("base64")
} catch (error) {
console.error("Encryption failed:", error)
// Fail secure: don't store if encryption fails
throw new Error(
"Failed to encrypt API key. Cannot securely store credentials.",
)
}
}
/**
* Decrypt a sensitive value using safeStorage
* Returns the original value if it's not encrypted or decryption fails
*/
function decryptValue(value: string): string {
if (!value || !value.startsWith(ENCRYPTED_PREFIX)) {
return value
}
if (!isEncryptionAvailable()) {
console.warn(
"Cannot decrypt value: safeStorage encryption is not available",
)
return value
}
try {
const base64Data = value.slice(ENCRYPTED_PREFIX.length)
const buffer = Buffer.from(base64Data, "base64")
return safeStorage.decryptString(buffer)
} catch (error) {
console.error("Failed to decrypt value:", error)
return value
}
}
/**
* Encrypt sensitive fields in a config object
*/
function encryptConfig(
config: Record<string, string | undefined>,
): Record<string, string | undefined> {
const encrypted = { ...config }
for (const field of SENSITIVE_FIELDS) {
if (encrypted[field]) {
encrypted[field] = encryptValue(encrypted[field] as string)
}
}
return encrypted
}
/**
* Decrypt sensitive fields in a config object
*/
function decryptConfig(
config: Record<string, string | undefined>,
): Record<string, string | undefined> {
const decrypted = { ...config }
for (const field of SENSITIVE_FIELDS) {
if (decrypted[field]) {
decrypted[field] = decryptValue(decrypted[field] as string)
}
}
return decrypted
}
/**
* Configuration preset interface
*/
export interface ConfigPreset {
id: string
name: string
createdAt: number
updatedAt: number
config: {
AI_PROVIDER?: string
AI_MODEL?: string
AI_API_KEY?: string
AI_BASE_URL?: string
TEMPERATURE?: string
[key: string]: string | undefined
}
}
/**
* Configuration file structure
*/
interface ConfigPresetsFile {
version: 1
currentPresetId: string | null
presets: ConfigPreset[]
}
const CONFIG_FILE_NAME = "config-presets.json"
/**
* Get the path to the config file
*/
function getConfigFilePath(): string {
const userDataPath = app.getPath("userData")
return path.join(userDataPath, CONFIG_FILE_NAME)
}
/**
* Load presets from the config file
* Decrypts sensitive fields automatically
*/
export function loadPresets(): ConfigPresetsFile {
const configPath = getConfigFilePath()
if (!existsSync(configPath)) {
return {
version: 1,
currentPresetId: null,
presets: [],
}
}
try {
const content = readFileSync(configPath, "utf-8")
const data = JSON.parse(content) as ConfigPresetsFile
// Decrypt sensitive fields in each preset
data.presets = data.presets.map((preset) => ({
...preset,
config: decryptConfig(preset.config) as ConfigPreset["config"],
}))
return data
} catch (error) {
console.error("Failed to load config presets:", error)
return {
version: 1,
currentPresetId: null,
presets: [],
}
}
}
/**
* Save presets to the config file
* Encrypts sensitive fields automatically
*/
export function savePresets(data: ConfigPresetsFile): void {
const configPath = getConfigFilePath()
const userDataPath = app.getPath("userData")
// Ensure the directory exists
if (!existsSync(userDataPath)) {
mkdirSync(userDataPath, { recursive: true })
}
// Encrypt sensitive fields before saving
const dataToSave: ConfigPresetsFile = {
...data,
presets: data.presets.map((preset) => ({
...preset,
config: encryptConfig(preset.config) as ConfigPreset["config"],
})),
}
try {
writeFileSync(configPath, JSON.stringify(dataToSave, null, 2), "utf-8")
} catch (error) {
console.error("Failed to save config presets:", error)
throw error
}
}
/**
* Get all presets
*/
export function getAllPresets(): ConfigPreset[] {
const data = loadPresets()
return data.presets
}
/**
* Get current preset ID
*/
export function getCurrentPresetId(): string | null {
const data = loadPresets()
return data.currentPresetId
}
/**
* Get current preset
*/
export function getCurrentPreset(): ConfigPreset | null {
const data = loadPresets()
if (!data.currentPresetId) {
return null
}
return data.presets.find((p) => p.id === data.currentPresetId) || null
}
/**
* Create a new preset
*/
export function createPreset(
preset: Omit<ConfigPreset, "id" | "createdAt" | "updatedAt">,
): ConfigPreset {
const data = loadPresets()
const now = Date.now()
const newPreset: ConfigPreset = {
id: randomUUID(),
name: preset.name,
config: preset.config,
createdAt: now,
updatedAt: now,
}
data.presets.push(newPreset)
savePresets(data)
return newPreset
}
/**
* Update an existing preset
*/
export function updatePreset(
id: string,
updates: Partial<Omit<ConfigPreset, "id" | "createdAt">>,
): ConfigPreset | null {
const data = loadPresets()
const index = data.presets.findIndex((p) => p.id === id)
if (index === -1) {
return null
}
const updatedPreset: ConfigPreset = {
...data.presets[index],
...updates,
updatedAt: Date.now(),
}
data.presets[index] = updatedPreset
savePresets(data)
return updatedPreset
}
/**
* Delete a preset
*/
export function deletePreset(id: string): boolean {
const data = loadPresets()
const index = data.presets.findIndex((p) => p.id === id)
if (index === -1) {
return false
}
data.presets.splice(index, 1)
// Clear current preset if it was deleted
if (data.currentPresetId === id) {
data.currentPresetId = null
}
savePresets(data)
return true
}
/**
* Set the current preset
*/
export function setCurrentPreset(id: string | null): boolean {
const data = loadPresets()
if (id !== null) {
const preset = data.presets.find((p) => p.id === id)
if (!preset) {
return false
}
}
data.currentPresetId = id
savePresets(data)
return true
}
/**
* Map generic AI_API_KEY and AI_BASE_URL to provider-specific environment variables
*/
const PROVIDER_ENV_MAP: Record<string, { apiKey: string; baseUrl: string }> = {
openai: { apiKey: "OPENAI_API_KEY", baseUrl: "OPENAI_BASE_URL" },
anthropic: { apiKey: "ANTHROPIC_API_KEY", baseUrl: "ANTHROPIC_BASE_URL" },
google: {
apiKey: "GOOGLE_GENERATIVE_AI_API_KEY",
baseUrl: "GOOGLE_BASE_URL",
},
azure: { apiKey: "AZURE_API_KEY", baseUrl: "AZURE_BASE_URL" },
openrouter: {
apiKey: "OPENROUTER_API_KEY",
baseUrl: "OPENROUTER_BASE_URL",
},
deepseek: { apiKey: "DEEPSEEK_API_KEY", baseUrl: "DEEPSEEK_BASE_URL" },
siliconflow: {
apiKey: "SILICONFLOW_API_KEY",
baseUrl: "SILICONFLOW_BASE_URL",
},
gateway: { apiKey: "AI_GATEWAY_API_KEY", baseUrl: "AI_GATEWAY_BASE_URL" },
// bedrock and ollama don't use API keys in the same way
bedrock: { apiKey: "", baseUrl: "" },
ollama: { apiKey: "", baseUrl: "OLLAMA_BASE_URL" },
}
/**
* Apply preset environment variables to the current process
* Returns the environment variables that were applied
*/
export function applyPresetToEnv(id: string): Record<string, string> | null {
const data = loadPresets()
const preset = data.presets.find((p) => p.id === id)
if (!preset) {
return null
}
const appliedEnv: Record<string, string> = {}
const provider = preset.config.AI_PROVIDER?.toLowerCase()
for (const [key, value] of Object.entries(preset.config)) {
if (value !== undefined && value !== "") {
// Map generic AI_API_KEY to provider-specific key
if (
key === "AI_API_KEY" &&
provider &&
PROVIDER_ENV_MAP[provider]
) {
const providerApiKey = PROVIDER_ENV_MAP[provider].apiKey
if (providerApiKey) {
process.env[providerApiKey] = value
appliedEnv[providerApiKey] = value
}
}
// Map generic AI_BASE_URL to provider-specific key
else if (
key === "AI_BASE_URL" &&
provider &&
PROVIDER_ENV_MAP[provider]
) {
const providerBaseUrl = PROVIDER_ENV_MAP[provider].baseUrl
if (providerBaseUrl) {
process.env[providerBaseUrl] = value
appliedEnv[providerBaseUrl] = value
}
}
// Apply other env vars directly
else {
process.env[key] = value
appliedEnv[key] = value
}
}
}
// Set as current preset
data.currentPresetId = id
savePresets(data)
return appliedEnv
}
/**
* Get environment variables from current preset
* Maps generic AI_API_KEY/AI_BASE_URL to provider-specific keys
*/
export function getCurrentPresetEnv(): Record<string, string> {
const preset = getCurrentPreset()
if (!preset) {
return {}
}
const env: Record<string, string> = {}
const provider = preset.config.AI_PROVIDER?.toLowerCase()
for (const [key, value] of Object.entries(preset.config)) {
if (value !== undefined && value !== "") {
// Map generic AI_API_KEY to provider-specific key
if (
key === "AI_API_KEY" &&
provider &&
PROVIDER_ENV_MAP[provider]
) {
const providerApiKey = PROVIDER_ENV_MAP[provider].apiKey
if (providerApiKey) {
env[providerApiKey] = value
}
}
// Map generic AI_BASE_URL to provider-specific key
else if (
key === "AI_BASE_URL" &&
provider &&
PROVIDER_ENV_MAP[provider]
) {
const providerBaseUrl = PROVIDER_ENV_MAP[provider].baseUrl
if (providerBaseUrl) {
env[providerBaseUrl] = value
}
}
// Apply other env vars directly
else {
env[key] = value
}
}
}
return env
}

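For context on the behavior being removed: applyPresetToEnv and getCurrentPresetEnv translated a preset's generic keys into provider-specific environment variables via PROVIDER_ENV_MAP. An illustrative preset config (values are placeholders) and the variables it would have produced:
// Illustrative only; demonstrates the mapping performed by the removed config-manager.
const exampleConfig = {
  AI_PROVIDER: "openai",
  AI_MODEL: "gpt-4o",
  AI_API_KEY: "sk-example",                  // placeholder
  AI_BASE_URL: "https://api.example.com/v1", // placeholder
  TEMPERATURE: "0.7",
}
// Resulting environment:
//   OPENAI_API_KEY=sk-example                  (AI_API_KEY via PROVIDER_ENV_MAP.openai.apiKey)
//   OPENAI_BASE_URL=https://api.example.com/v1 (AI_BASE_URL via PROVIDER_ENV_MAP.openai.baseUrl)
//   AI_PROVIDER, AI_MODEL, TEMPERATURE applied unchanged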

@@ -1,10 +1,8 @@
import { app, BrowserWindow, dialog, shell } from "electron"
import { buildAppMenu } from "./app-menu"
import { getCurrentPresetEnv } from "./config-manager"
import { loadEnvFile } from "./env-loader"
import { registerIpcHandlers } from "./ipc-handlers"
import { startNextServer, stopNextServer } from "./next-server"
import { registerSettingsWindowHandlers } from "./settings-window"
import { createWindow, getMainWindow } from "./window-manager"
// Single instance lock
@@ -24,19 +22,12 @@ if (!gotTheLock) {
// Load environment variables from .env files
loadEnvFile()
// Apply saved preset environment variables (overrides .env)
const presetEnv = getCurrentPresetEnv()
for (const [key, value] of Object.entries(presetEnv)) {
process.env[key] = value
}
const isDev = process.env.NODE_ENV === "development"
let serverUrl: string | null = null
app.whenReady().then(async () => {
// Register IPC handlers
registerIpcHandlers()
registerSettingsWindowHandlers()
// Build application menu
buildAppMenu()

electron/ipc-handlers.ts

@@ -1,43 +1,4 @@
import { app, BrowserWindow, dialog, ipcMain } from "electron"
import {
applyPresetToEnv,
type ConfigPreset,
createPreset,
deletePreset,
getAllPresets,
getCurrentPreset,
getCurrentPresetId,
setCurrentPreset,
updatePreset,
} from "./config-manager"
import { restartNextServer } from "./next-server"
/**
* Allowed configuration keys for presets
* This whitelist prevents arbitrary environment variable injection
*/
const ALLOWED_CONFIG_KEYS = new Set([
"AI_PROVIDER",
"AI_MODEL",
"AI_API_KEY",
"AI_BASE_URL",
"TEMPERATURE",
])
/**
* Sanitize preset config to only include allowed keys
*/
function sanitizePresetConfig(
config: Record<string, string | undefined>,
): Record<string, string | undefined> {
const sanitized: Record<string, string | undefined> = {}
for (const key of ALLOWED_CONFIG_KEYS) {
if (key in config && typeof config[key] === "string") {
sanitized[key] = config[key]
}
}
return sanitized
}
/**
* Register all IPC handlers
@@ -123,90 +84,4 @@ export function registerIpcHandlers(): void {
return false
}
})
// ==================== Config Presets ====================
ipcMain.handle("config-presets:get-all", () => {
return getAllPresets()
})
ipcMain.handle("config-presets:get-current", () => {
return getCurrentPreset()
})
ipcMain.handle("config-presets:get-current-id", () => {
return getCurrentPresetId()
})
ipcMain.handle(
"config-presets:save",
(
_event,
preset: Omit<ConfigPreset, "id" | "createdAt" | "updatedAt"> & {
id?: string
},
) => {
// Validate preset name
if (typeof preset.name !== "string" || !preset.name.trim()) {
throw new Error("Invalid preset name")
}
// Sanitize config to only allow whitelisted keys
const sanitizedConfig = sanitizePresetConfig(preset.config ?? {})
if (preset.id) {
// Update existing preset
return updatePreset(preset.id, {
name: preset.name.trim(),
config: sanitizedConfig,
})
}
// Create new preset
return createPreset({
name: preset.name.trim(),
config: sanitizedConfig,
})
},
)
ipcMain.handle("config-presets:delete", (_event, id: string) => {
return deletePreset(id)
})
ipcMain.handle("config-presets:apply", async (_event, id: string) => {
const env = applyPresetToEnv(id)
if (!env) {
return { success: false, error: "Preset not found" }
}
const isDev = process.env.NODE_ENV === "development"
if (isDev) {
// In development mode, the config file change will trigger
// the file watcher in electron-dev.mjs to restart Next.js
// We just need to save the preset (already done in applyPresetToEnv)
return { success: true, env, devMode: true }
}
// Production mode: restart the Next.js server to apply new environment variables
try {
await restartNextServer()
return { success: true, env }
} catch (error) {
return {
success: false,
error:
error instanceof Error
? error.message
: "Failed to restart server",
}
}
})
ipcMain.handle(
"config-presets:set-current",
(_event, id: string | null) => {
return setCurrentPreset(id)
},
)
}

electron/settings-window.ts (deleted)

@@ -1,78 +0,0 @@
import path from "node:path"
import { app, BrowserWindow, ipcMain } from "electron"
let settingsWindow: BrowserWindow | null = null
/**
* Create and show the settings window
*/
export function showSettingsWindow(parentWindow?: BrowserWindow): void {
// If settings window already exists, focus it
if (settingsWindow && !settingsWindow.isDestroyed()) {
settingsWindow.focus()
return
}
// Determine path to settings preload script
// In compiled output: dist-electron/preload/settings.js
const preloadPath = path.join(__dirname, "..", "preload", "settings.js")
// Determine path to settings HTML
// In packaged app: app.asar/dist-electron/settings/index.html
// In development: electron/settings/index.html
const settingsHtmlPath = app.isPackaged
? path.join(__dirname, "..", "settings", "index.html")
: path.join(__dirname, "..", "..", "electron", "settings", "index.html")
settingsWindow = new BrowserWindow({
width: 600,
height: 700,
minWidth: 500,
minHeight: 500,
parent: parentWindow,
modal: false,
show: false,
title: "Settings - Next AI Draw.io",
webPreferences: {
preload: preloadPath,
contextIsolation: true,
nodeIntegration: false,
sandbox: true,
},
})
settingsWindow.loadFile(settingsHtmlPath)
settingsWindow.once("ready-to-show", () => {
settingsWindow?.show()
})
settingsWindow.on("closed", () => {
settingsWindow = null
})
}
/**
* Close the settings window if it exists
*/
export function closeSettingsWindow(): void {
if (settingsWindow && !settingsWindow.isDestroyed()) {
settingsWindow.close()
settingsWindow = null
}
}
/**
* Check if settings window is open
*/
export function isSettingsWindowOpen(): boolean {
return settingsWindow !== null && !settingsWindow.isDestroyed()
}
/**
* Register settings window IPC handlers
*/
export function registerSettingsWindowHandlers(): void {
ipcMain.on("settings:close", () => {
closeSettingsWindow()
})
}


@@ -1,35 +0,0 @@
/**
* Preload script for settings window
* Exposes APIs for managing configuration presets
*/
import { contextBridge, ipcRenderer } from "electron"
// Expose settings API to the renderer process
contextBridge.exposeInMainWorld("settingsAPI", {
// Get all presets
getPresets: () => ipcRenderer.invoke("config-presets:get-all"),
// Get current preset ID
getCurrentPresetId: () =>
ipcRenderer.invoke("config-presets:get-current-id"),
// Get current preset
getCurrentPreset: () => ipcRenderer.invoke("config-presets:get-current"),
// Save (create or update) a preset
savePreset: (preset: {
id?: string
name: string
config: Record<string, string | undefined>
}) => ipcRenderer.invoke("config-presets:save", preset),
// Delete a preset
deletePreset: (id: string) =>
ipcRenderer.invoke("config-presets:delete", id),
// Apply a preset (sets environment variables and restarts server)
applyPreset: (id: string) => ipcRenderer.invoke("config-presets:apply", id),
// Close settings window
close: () => ipcRenderer.send("settings:close"),
})

electron/settings/index.html (deleted)

@@ -1,110 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'; style-src 'self';">
<title>Settings - Next AI Draw.io</title>
<link rel="stylesheet" href="./settings.css">
</head>
<body>
<div class="container">
<h1>Configuration Presets</h1>
<div class="section">
<h2>Presets</h2>
<div id="preset-list" class="preset-list">
<!-- Presets will be loaded here -->
</div>
<button id="add-preset-btn" class="btn btn-primary">
+ Add New Preset
</button>
</div>
</div>
<!-- Add/Edit Preset Modal -->
<div id="preset-modal" class="modal-overlay">
<div class="modal">
<div class="modal-header">
<h3 id="modal-title">Add Preset</h3>
</div>
<div class="modal-body">
<form id="preset-form">
<input type="hidden" id="preset-id">
<div class="form-group">
<label for="preset-name">Preset Name *</label>
<input type="text" id="preset-name" required placeholder="e.g., Work, Personal, Testing">
</div>
<div class="form-group">
<label for="ai-provider">AI Provider</label>
<select id="ai-provider">
<option value="">-- Select Provider --</option>
<option value="openai">OpenAI</option>
<option value="anthropic">Anthropic (Claude)</option>
<option value="google">Google AI (Gemini)</option>
<option value="azure">Azure OpenAI</option>
<option value="bedrock">AWS Bedrock</option>
<option value="openrouter">OpenRouter</option>
<option value="deepseek">DeepSeek</option>
<option value="siliconflow">SiliconFlow</option>
<option value="ollama">Ollama (Local)</option>
</select>
</div>
<div class="form-group">
<label for="ai-model">Model ID</label>
<input type="text" id="ai-model" placeholder="e.g., gpt-4o, claude-sonnet-4-5">
<div class="hint">The model identifier to use with the selected provider</div>
</div>
<div class="form-group">
<label for="ai-api-key">API Key</label>
<input type="password" id="ai-api-key" placeholder="Your API key">
<div class="hint">This will be stored locally on your device</div>
</div>
<div class="form-group">
<label for="ai-base-url">Base URL (Optional)</label>
<input type="text" id="ai-base-url" placeholder="https://api.example.com/v1">
<div class="hint">Custom API endpoint URL</div>
</div>
<div class="form-group">
<label for="temperature">Temperature (Optional)</label>
<input type="text" id="temperature" placeholder="0.7">
<div class="hint">Controls randomness (0.0 - 2.0)</div>
</div>
</form>
</div>
<div class="modal-footer">
<button type="button" id="cancel-btn" class="btn btn-secondary">Cancel</button>
<button type="button" id="save-btn" class="btn btn-primary">Save</button>
</div>
</div>
</div>
<!-- Delete Confirmation Modal -->
<div id="delete-modal" class="modal-overlay">
<div class="modal">
<div class="modal-header">
<h3>Delete Preset</h3>
</div>
<div class="modal-body">
<p>Are you sure you want to delete "<span id="delete-preset-name"></span>"?</p>
<p class="delete-warning">This action cannot be undone.</p>
</div>
<div class="modal-footer">
<button type="button" id="delete-cancel-btn" class="btn btn-secondary">Cancel</button>
<button type="button" id="delete-confirm-btn" class="btn btn-danger">Delete</button>
</div>
</div>
</div>
<!-- Toast notification -->
<div id="toast" class="toast"></div>
<script src="./settings.js"></script>
</body>
</html>

electron/settings/settings.css (deleted)

@@ -1,311 +0,0 @@
:root {
--bg-primary: #ffffff;
--bg-secondary: #f5f5f5;
--bg-hover: #e8e8e8;
--text-primary: #1a1a1a;
--text-secondary: #666666;
--border-color: #e0e0e0;
--accent-color: #0066cc;
--accent-hover: #0052a3;
--danger-color: #dc3545;
--success-color: #28a745;
}
@media (prefers-color-scheme: dark) {
:root {
--bg-primary: #1a1a1a;
--bg-secondary: #2d2d2d;
--bg-hover: #3d3d3d;
--text-primary: #ffffff;
--text-secondary: #a0a0a0;
--border-color: #404040;
--accent-color: #4da6ff;
--accent-hover: #66b3ff;
}
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family:
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu,
sans-serif;
background-color: var(--bg-primary);
color: var(--text-primary);
line-height: 1.5;
}
.container {
max-width: 560px;
margin: 0 auto;
padding: 24px;
}
h1 {
font-size: 24px;
font-weight: 600;
margin-bottom: 24px;
padding-bottom: 16px;
border-bottom: 1px solid var(--border-color);
}
h2 {
font-size: 16px;
font-weight: 600;
margin-bottom: 16px;
color: var(--text-secondary);
}
.section {
margin-bottom: 32px;
}
.preset-list {
display: flex;
flex-direction: column;
gap: 12px;
margin-bottom: 16px;
}
.preset-card {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 8px;
padding: 16px;
cursor: pointer;
transition: all 0.2s ease;
}
.preset-card:hover {
background: var(--bg-hover);
}
.preset-card.active {
border-color: var(--accent-color);
box-shadow: 0 0 0 1px var(--accent-color);
}
.preset-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 8px;
}
.preset-name {
font-weight: 600;
font-size: 15px;
}
.preset-badge {
background: var(--accent-color);
color: white;
font-size: 11px;
padding: 2px 8px;
border-radius: 10px;
}
.preset-info {
font-size: 13px;
color: var(--text-secondary);
}
.preset-actions {
display: flex;
gap: 8px;
margin-top: 12px;
}
.btn {
padding: 8px 16px;
border: none;
border-radius: 6px;
font-size: 14px;
cursor: pointer;
transition: all 0.2s ease;
font-weight: 500;
}
.btn-primary {
background: var(--accent-color);
color: white;
}
.btn-primary:hover {
background: var(--accent-hover);
}
.btn-secondary {
background: var(--bg-secondary);
color: var(--text-primary);
border: 1px solid var(--border-color);
}
.btn-secondary:hover {
background: var(--bg-hover);
}
.btn-danger {
background: var(--danger-color);
color: white;
}
.btn-danger:hover {
opacity: 0.9;
}
.btn-sm {
padding: 6px 12px;
font-size: 13px;
}
.empty-state {
text-align: center;
padding: 40px 20px;
color: var(--text-secondary);
}
.empty-state p {
margin-bottom: 16px;
}
/* Modal */
.modal-overlay {
display: none;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.5);
z-index: 100;
align-items: center;
justify-content: center;
}
.modal-overlay.show {
display: flex;
}
.modal {
background: var(--bg-primary);
border-radius: 12px;
width: 90%;
max-width: 480px;
max-height: 90vh;
overflow-y: auto;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.2);
}
.modal-header {
padding: 20px 24px;
border-bottom: 1px solid var(--border-color);
}
.modal-header h3 {
font-size: 18px;
font-weight: 600;
}
.modal-body {
padding: 24px;
}
.modal-footer {
padding: 16px 24px;
border-top: 1px solid var(--border-color);
display: flex;
justify-content: flex-end;
gap: 12px;
}
.form-group {
margin-bottom: 20px;
}
.form-group label {
display: block;
font-size: 14px;
font-weight: 500;
margin-bottom: 6px;
}
.form-group input,
.form-group select {
width: 100%;
padding: 10px 12px;
border: 1px solid var(--border-color);
border-radius: 6px;
font-size: 14px;
background: var(--bg-primary);
color: var(--text-primary);
}
.form-group input:focus,
.form-group select:focus {
outline: none;
border-color: var(--accent-color);
box-shadow: 0 0 0 2px rgba(0, 102, 204, 0.2);
}
.form-group .hint {
font-size: 12px;
color: var(--text-secondary);
margin-top: 4px;
}
.loading {
display: inline-block;
width: 16px;
height: 16px;
border: 2px solid var(--border-color);
border-top-color: var(--accent-color);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin {
to {
transform: rotate(360deg);
}
}
.toast {
position: fixed;
bottom: 24px;
left: 50%;
transform: translateX(-50%);
background: var(--text-primary);
color: var(--bg-primary);
padding: 12px 24px;
border-radius: 8px;
font-size: 14px;
z-index: 200;
opacity: 0;
transition: opacity 0.3s ease;
}
.toast.show {
opacity: 1;
}
.toast.success {
background: var(--success-color);
color: white;
}
.toast.error {
background: var(--danger-color);
color: white;
}
/* Inline style replacements */
.delete-warning {
color: var(--text-secondary);
margin-top: 8px;
font-size: 14px;
}

electron/settings/settings.js (deleted)

@@ -1,311 +0,0 @@
// Settings page JavaScript
// This file handles the UI interactions for the settings window
let presets = []
let currentPresetId = null
let editingPresetId = null
let deletingPresetId = null
// DOM Elements
const presetList = document.getElementById("preset-list")
const addPresetBtn = document.getElementById("add-preset-btn")
const presetModal = document.getElementById("preset-modal")
const deleteModal = document.getElementById("delete-modal")
const presetForm = document.getElementById("preset-form")
const modalTitle = document.getElementById("modal-title")
const toast = document.getElementById("toast")
// Form fields
const presetIdField = document.getElementById("preset-id")
const presetNameField = document.getElementById("preset-name")
const aiProviderField = document.getElementById("ai-provider")
const aiModelField = document.getElementById("ai-model")
const aiApiKeyField = document.getElementById("ai-api-key")
const aiBaseUrlField = document.getElementById("ai-base-url")
const temperatureField = document.getElementById("temperature")
// Buttons
const cancelBtn = document.getElementById("cancel-btn")
const saveBtn = document.getElementById("save-btn")
const deleteCancelBtn = document.getElementById("delete-cancel-btn")
const deleteConfirmBtn = document.getElementById("delete-confirm-btn")
// Initialize
document.addEventListener("DOMContentLoaded", async () => {
await loadPresets()
setupEventListeners()
})
// Load presets from main process
async function loadPresets() {
try {
presets = await window.settingsAPI.getPresets()
currentPresetId = await window.settingsAPI.getCurrentPresetId()
renderPresets()
} catch (error) {
console.error("Failed to load presets:", error)
showToast("Failed to load presets", "error")
}
}
// Render presets list
function renderPresets() {
if (presets.length === 0) {
presetList.innerHTML = `
<div class="empty-state">
<p>No presets configured yet.</p>
<p>Add a preset to quickly switch between different AI configurations.</p>
</div>
`
return
}
presetList.innerHTML = presets
.map((preset) => {
const isActive = preset.id === currentPresetId
const providerLabel = getProviderLabel(preset.config.AI_PROVIDER)
return `
<div class="preset-card ${isActive ? "active" : ""}" data-id="${preset.id}">
<div class="preset-header">
<span class="preset-name">${escapeHtml(preset.name)}</span>
${isActive ? '<span class="preset-badge">Active</span>' : ""}
</div>
<div class="preset-info">
${providerLabel ? `Provider: ${providerLabel}` : "No provider configured"}
${preset.config.AI_MODEL ? ` • Model: ${escapeHtml(preset.config.AI_MODEL)}` : ""}
</div>
<div class="preset-actions">
${!isActive ? `<button class="btn btn-primary btn-sm apply-btn" data-id="${preset.id}">Apply</button>` : ""}
<button class="btn btn-secondary btn-sm edit-btn" data-id="${preset.id}">Edit</button>
<button class="btn btn-secondary btn-sm delete-btn" data-id="${preset.id}">Delete</button>
</div>
</div>
`
})
.join("")
// Add event listeners to buttons
presetList.querySelectorAll(".apply-btn").forEach((btn) => {
btn.addEventListener("click", (e) => {
e.stopPropagation()
applyPreset(btn.dataset.id)
})
})
presetList.querySelectorAll(".edit-btn").forEach((btn) => {
btn.addEventListener("click", (e) => {
e.stopPropagation()
openEditModal(btn.dataset.id)
})
})
presetList.querySelectorAll(".delete-btn").forEach((btn) => {
btn.addEventListener("click", (e) => {
e.stopPropagation()
openDeleteModal(btn.dataset.id)
})
})
}
// Setup event listeners
function setupEventListeners() {
addPresetBtn.addEventListener("click", () => openAddModal())
cancelBtn.addEventListener("click", () => closeModal())
saveBtn.addEventListener("click", () => savePreset())
deleteCancelBtn.addEventListener("click", () => closeDeleteModal())
deleteConfirmBtn.addEventListener("click", () => confirmDelete())
// Close modal on overlay click
presetModal.addEventListener("click", (e) => {
if (e.target === presetModal) closeModal()
})
deleteModal.addEventListener("click", (e) => {
if (e.target === deleteModal) closeDeleteModal()
})
// Handle Enter key in form
presetForm.addEventListener("keydown", (e) => {
if (e.key === "Enter") {
e.preventDefault()
savePreset()
}
})
}
// Open add modal
function openAddModal() {
editingPresetId = null
modalTitle.textContent = "Add Preset"
presetForm.reset()
presetIdField.value = ""
presetModal.classList.add("show")
presetNameField.focus()
}
// Open edit modal
function openEditModal(id) {
const preset = presets.find((p) => p.id === id)
if (!preset) return
editingPresetId = id
modalTitle.textContent = "Edit Preset"
presetIdField.value = preset.id
presetNameField.value = preset.name
aiProviderField.value = preset.config.AI_PROVIDER || ""
aiModelField.value = preset.config.AI_MODEL || ""
aiApiKeyField.value = preset.config.AI_API_KEY || ""
aiBaseUrlField.value = preset.config.AI_BASE_URL || ""
temperatureField.value = preset.config.TEMPERATURE || ""
presetModal.classList.add("show")
presetNameField.focus()
}
// Close modal
function closeModal() {
presetModal.classList.remove("show")
editingPresetId = null
}
// Open delete modal
function openDeleteModal(id) {
const preset = presets.find((p) => p.id === id)
if (!preset) return
deletingPresetId = id
document.getElementById("delete-preset-name").textContent = preset.name
deleteModal.classList.add("show")
}
// Close delete modal
function closeDeleteModal() {
deleteModal.classList.remove("show")
deletingPresetId = null
}
// Save preset
async function savePreset() {
const name = presetNameField.value.trim()
if (!name) {
showToast("Please enter a preset name", "error")
presetNameField.focus()
return
}
const preset = {
id: editingPresetId || undefined,
name: name,
config: {
AI_PROVIDER: aiProviderField.value || undefined,
AI_MODEL: aiModelField.value.trim() || undefined,
AI_API_KEY: aiApiKeyField.value.trim() || undefined,
AI_BASE_URL: aiBaseUrlField.value.trim() || undefined,
TEMPERATURE: temperatureField.value.trim() || undefined,
},
}
// Remove undefined values
Object.keys(preset.config).forEach((key) => {
if (preset.config[key] === undefined) {
delete preset.config[key]
}
})
try {
saveBtn.disabled = true
saveBtn.innerHTML = '<span class="loading"></span>'
await window.settingsAPI.savePreset(preset)
await loadPresets()
closeModal()
showToast(
editingPresetId ? "Preset updated" : "Preset created",
"success",
)
} catch (error) {
console.error("Failed to save preset:", error)
showToast("Failed to save preset", "error")
} finally {
saveBtn.disabled = false
saveBtn.textContent = "Save"
}
}
// Confirm delete
async function confirmDelete() {
if (!deletingPresetId) return
try {
deleteConfirmBtn.disabled = true
deleteConfirmBtn.innerHTML = '<span class="loading"></span>'
await window.settingsAPI.deletePreset(deletingPresetId)
await loadPresets()
closeDeleteModal()
showToast("Preset deleted", "success")
} catch (error) {
console.error("Failed to delete preset:", error)
showToast("Failed to delete preset", "error")
} finally {
deleteConfirmBtn.disabled = false
deleteConfirmBtn.textContent = "Delete"
}
}
// Apply preset
async function applyPreset(id) {
try {
const btn = presetList.querySelector(`.apply-btn[data-id="${id}"]`)
if (btn) {
btn.disabled = true
btn.innerHTML = '<span class="loading"></span>'
}
const result = await window.settingsAPI.applyPreset(id)
if (result.success) {
currentPresetId = id
renderPresets()
showToast("Preset applied, server restarting...", "success")
} else {
showToast(result.error || "Failed to apply preset", "error")
}
} catch (error) {
console.error("Failed to apply preset:", error)
showToast("Failed to apply preset", "error")
}
}
// Get provider display label
function getProviderLabel(provider) {
const labels = {
openai: "OpenAI",
anthropic: "Anthropic",
google: "Google AI",
azure: "Azure OpenAI",
bedrock: "AWS Bedrock",
openrouter: "OpenRouter",
deepseek: "DeepSeek",
siliconflow: "SiliconFlow",
ollama: "Ollama",
}
return labels[provider] || provider
}
// Show toast notification
function showToast(message, type = "") {
toast.textContent = message
toast.className = "toast show" + (type ? ` ${type}` : "")
setTimeout(() => {
toast.classList.remove("show")
}, 3000)
}
// Escape HTML to prevent XSS
function escapeHtml(text) {
const div = document.createElement("div")
div.textContent = text
return div.innerHTML
}

.env.example

@@ -68,6 +68,10 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
# SILICONFLOW_API_KEY=sk-...
# SILICONFLOW_BASE_URL=https://api.siliconflow.com/v1 # Optional: switch to https://api.siliconflow.cn/v1 if needed
# SGLang Configuration (OpenAI-compatible)
# SGLANG_API_KEY=your-sglang-api-key
# SGLANG_BASE_URL=http://127.0.0.1:8000/v1 # Your SGLang endpoint
# Vercel AI Gateway Configuration
# Get your API key from: https://vercel.com/ai-gateway
# Model format: "provider/model" e.g., "openai/gpt-4o", "anthropic/claude-sonnet-4-5"
@@ -93,6 +97,12 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
# NEXT_PUBLIC_DRAWIO_BASE_URL=https://embed.diagrams.net # Default: https://embed.diagrams.net
# Use this to point to a self-hosted draw.io instance
# Subdirectory Deployment (Optional)
# For deploying to a subdirectory (e.g., https://example.com/nextaidrawio)
# Set this to your subdirectory path with leading slash (e.g., /nextaidrawio)
# Leave empty for root deployment (default)
# NEXT_PUBLIC_BASE_PATH=/nextaidrawio
# PDF Input Feature (Optional)
# Enable PDF file upload to extract text and generate diagrams
# Enabled by default. Set to "false" to disable.

hooks/use-diagram-tool-handlers.ts (new file)

@@ -0,0 +1,383 @@
import type { MutableRefObject } from "react"
import { isMxCellXmlComplete, wrapWithMxFile } from "@/lib/utils"
const DEBUG = process.env.NODE_ENV === "development"
interface ToolCall {
toolCallId: string
toolName: string
input: unknown
}
type AddToolOutputSuccess = {
tool: string
toolCallId: string
state?: "output-available"
output: string
errorText?: undefined
}
type AddToolOutputError = {
tool: string
toolCallId: string
state: "output-error"
output?: undefined
errorText: string
}
type AddToolOutputParams = AddToolOutputSuccess | AddToolOutputError
type AddToolOutputFn = (params: AddToolOutputParams) => void
interface DiagramOperation {
operation: "update" | "add" | "delete"
cell_id: string
new_xml?: string
}
interface UseDiagramToolHandlersParams {
partialXmlRef: MutableRefObject<string>
editDiagramOriginalXmlRef: MutableRefObject<Map<string, string>>
chartXMLRef: MutableRefObject<string>
onDisplayChart: (xml: string, skipValidation?: boolean) => string | null
onFetchChart: (saveToHistory?: boolean) => Promise<string>
onExport: () => void
}
/**
* Hook that creates the onToolCall handler for diagram-related tools.
* Handles display_diagram, edit_diagram, and append_diagram tools.
*
* Note: addToolOutput is passed at call time (not at hook init) because it
* comes from useChat; wiring it in at hook initialization would create a circular dependency.
*/
export function useDiagramToolHandlers({
partialXmlRef,
editDiagramOriginalXmlRef,
chartXMLRef,
onDisplayChart,
onFetchChart,
onExport,
}: UseDiagramToolHandlersParams) {
const handleToolCall = async (
{ toolCall }: { toolCall: ToolCall },
addToolOutput: AddToolOutputFn,
) => {
if (DEBUG) {
console.log(
`[onToolCall] Tool: ${toolCall.toolName}, CallId: ${toolCall.toolCallId}`,
)
}
if (toolCall.toolName === "display_diagram") {
await handleDisplayDiagram(toolCall, addToolOutput)
} else if (toolCall.toolName === "edit_diagram") {
await handleEditDiagram(toolCall, addToolOutput)
} else if (toolCall.toolName === "append_diagram") {
handleAppendDiagram(toolCall, addToolOutput)
}
}
const handleDisplayDiagram = async (
toolCall: ToolCall,
addToolOutput: AddToolOutputFn,
) => {
const { xml } = toolCall.input as { xml: string }
// DEBUG: Log raw input to diagnose false truncation detection
if (DEBUG) {
console.log(
"[display_diagram] XML ending (last 100 chars):",
xml.slice(-100),
)
console.log("[display_diagram] XML length:", xml.length)
}
// Check if XML is truncated (incomplete mxCell indicates truncated output)
const isTruncated = !isMxCellXmlComplete(xml)
if (DEBUG) {
console.log("[display_diagram] isTruncated:", isTruncated)
}
if (isTruncated) {
// Store the partial XML for continuation via append_diagram
partialXmlRef.current = xml
// Tell LLM to use append_diagram to continue
const partialEnding = partialXmlRef.current.slice(-500)
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Output was truncated due to length limits. Use the append_diagram tool to continue.
Your output ended with:
\`\`\`
${partialEnding}
\`\`\`
NEXT STEP: Call append_diagram with the continuation XML.
- Do NOT include wrapper tags or root cells (id="0", id="1")
- Start from EXACTLY where you stopped
- Complete all remaining mxCell elements`,
})
return
}
// Complete XML received - use it directly
// (continuation is now handled via append_diagram tool)
const finalXml = xml
partialXmlRef.current = "" // Reset any partial from previous truncation
// Wrap raw XML with full mxfile structure for draw.io
const fullXml = wrapWithMxFile(finalXml)
// loadDiagram validates and returns error if invalid
const validationError = onDisplayChart(fullXml)
if (validationError) {
console.warn("[display_diagram] Validation error:", validationError)
// Return error to model - sendAutomaticallyWhen will trigger retry
if (DEBUG) {
console.log(
"[display_diagram] Adding tool output with state: output-error",
)
}
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `${validationError}
Please fix the XML issues and call display_diagram again with corrected XML.
Your failed XML:
\`\`\`xml
${finalXml}
\`\`\``,
})
} else {
// Success - diagram will be rendered by chat-message-display
if (DEBUG) {
console.log(
"[display_diagram] Success! Adding tool output with state: output-available",
)
}
addToolOutput({
tool: "display_diagram",
toolCallId: toolCall.toolCallId,
output: "Successfully displayed the diagram.",
})
if (DEBUG) {
console.log(
"[display_diagram] Tool output added. Diagram should be visible now.",
)
}
}
}
const handleEditDiagram = async (
toolCall: ToolCall,
addToolOutput: AddToolOutputFn,
) => {
const { operations } = toolCall.input as {
operations: DiagramOperation[]
}
let currentXml = ""
try {
// Use the original XML captured during streaming (shared with chat-message-display)
// This ensures we apply operations to the same base XML that streaming used
const originalXml = editDiagramOriginalXmlRef.current.get(
toolCall.toolCallId,
)
if (originalXml) {
currentXml = originalXml
} else {
// Fallback: use chartXML from ref if streaming didn't capture original
const cachedXML = chartXMLRef.current
if (cachedXML) {
currentXml = cachedXML
} else {
// Last resort: export from iframe
currentXml = await onFetchChart(false)
}
}
const { applyDiagramOperations } = await import("@/lib/utils")
const { result: editedXml, errors } = applyDiagramOperations(
currentXml,
operations,
)
// Check for operation errors
if (errors.length > 0) {
const errorMessages = errors
.map(
(e) =>
`- ${e.type} on cell_id="${e.cellId}": ${e.message}`,
)
.join("\n")
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Some operations failed:\n${errorMessages}
Current diagram XML:
\`\`\`xml
${currentXml}
\`\`\`
Please check the cell IDs and retry.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(toolCall.toolCallId)
return
}
// loadDiagram validates and returns error if invalid
const validationError = onDisplayChart(editedXml)
if (validationError) {
console.warn(
"[edit_diagram] Validation error:",
validationError,
)
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Edit produced invalid XML: ${validationError}
Current diagram XML:
\`\`\`xml
${currentXml}
\`\`\`
Please fix the operations to avoid structural issues.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(toolCall.toolCallId)
return
}
onExport()
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
output: `Successfully applied ${operations.length} operation(s) to the diagram.`,
})
// Clean up the shared original XML ref
editDiagramOriginalXmlRef.current.delete(toolCall.toolCallId)
} catch (error) {
console.error("[edit_diagram] Failed:", error)
const errorMessage =
error instanceof Error ? error.message : String(error)
addToolOutput({
tool: "edit_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Edit failed: ${errorMessage}
Current diagram XML:
\`\`\`xml
${currentXml || "No XML available"}
\`\`\`
Please check cell IDs and retry, or use display_diagram to regenerate.`,
})
// Clean up the shared original XML ref even on error
editDiagramOriginalXmlRef.current.delete(toolCall.toolCallId)
}
}
const handleAppendDiagram = (
toolCall: ToolCall,
addToolOutput: AddToolOutputFn,
) => {
const { xml } = toolCall.input as { xml: string }
// Detect if LLM incorrectly started fresh instead of continuing
// LLM should only output bare mxCells now, so wrapper tags indicate error
const trimmed = xml.trim()
const isFreshStart =
trimmed.startsWith("<mxGraphModel") ||
trimmed.startsWith("<root") ||
trimmed.startsWith("<mxfile") ||
trimmed.startsWith('<mxCell id="0"') ||
trimmed.startsWith('<mxCell id="1"')
if (isFreshStart) {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `ERROR: You started fresh with wrapper tags. Do NOT include wrapper tags or root cells (id="0", id="1").
Continue from EXACTLY where the partial ended:
\`\`\`
${partialXmlRef.current.slice(-500)}
\`\`\`
Start your continuation with the NEXT character after where it stopped.`,
})
return
}
// Append to accumulated XML
partialXmlRef.current += xml
// Check if XML is now complete (last mxCell is complete)
const isComplete = isMxCellXmlComplete(partialXmlRef.current)
if (isComplete) {
// Wrap and display the complete diagram
const finalXml = partialXmlRef.current
partialXmlRef.current = "" // Reset
const fullXml = wrapWithMxFile(finalXml)
const validationError = onDisplayChart(fullXml)
if (validationError) {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `Validation error after assembly: ${validationError}
Assembled XML:
\`\`\`xml
${finalXml.substring(0, 2000)}...
\`\`\`
Please use display_diagram with corrected XML.`,
})
} else {
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
output: "Diagram assembly complete and displayed successfully.",
})
}
} else {
// Still incomplete - signal to continue
addToolOutput({
tool: "append_diagram",
toolCallId: toolCall.toolCallId,
state: "output-error",
errorText: `XML still incomplete (mxCell not closed). Call append_diagram again to continue.
Current ending:
\`\`\`
${partialXmlRef.current.slice(-500)}
\`\`\`
Continue from EXACTLY where you stopped.`,
})
}
}
return { handleToolCall }
}
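All three handlers funnel raw mxCell markup through wrapWithMxFile before handing it to draw.io, which is why the error messages above insist the model must not emit wrapper tags or the root cells id="0"/"1" itself. The helper is not shown in this excerpt; the sketch below is only an assumption of the minimal structure it produces, not the actual implementation.

```ts
// Minimal sketch, assuming wrapWithMxFile only supplies the standard draw.io scaffolding.
// The real helper in this codebase may add page attributes, ids, or geometry defaults.
function wrapWithMxFileSketch(innerCells: string): string {
    return [
        "<mxfile>",
        '  <diagram name="Page-1">',
        "    <mxGraphModel>",
        "      <root>",
        '        <mxCell id="0" />',
        '        <mxCell id="1" parent="0" />',
        `        ${innerCells}`,
        "      </root>",
        "    </mxGraphModel>",
        "  </diagram>",
        "</mxfile>",
    ].join("\n")
}
```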

373
hooks/use-model-config.ts Normal file
View File

@@ -0,0 +1,373 @@
"use client"
import { useCallback, useEffect, useState } from "react"
import { STORAGE_KEYS } from "@/lib/storage"
import {
createEmptyConfig,
createModelConfig,
createProviderConfig,
type FlattenedModel,
findModelById,
flattenModels,
type ModelConfig,
type MultiModelConfig,
PROVIDER_INFO,
type ProviderConfig,
type ProviderName,
} from "@/lib/types/model-config"
// Old storage keys for migration
const OLD_KEYS = {
aiProvider: "next-ai-draw-io-ai-provider",
aiBaseUrl: "next-ai-draw-io-ai-base-url",
aiApiKey: "next-ai-draw-io-ai-api-key",
aiModel: "next-ai-draw-io-ai-model",
}
/**
* Migrate from old single-provider format to new multi-model format
*/
function migrateOldConfig(): MultiModelConfig | null {
if (typeof window === "undefined") return null
const oldProvider = localStorage.getItem(OLD_KEYS.aiProvider)
const oldApiKey = localStorage.getItem(OLD_KEYS.aiApiKey)
const oldModel = localStorage.getItem(OLD_KEYS.aiModel)
// No old config to migrate
if (!oldProvider || !oldApiKey || !oldModel) return null
const oldBaseUrl = localStorage.getItem(OLD_KEYS.aiBaseUrl)
// Create new config from old format
const provider = createProviderConfig(oldProvider as ProviderName)
provider.apiKey = oldApiKey
if (oldBaseUrl) provider.baseUrl = oldBaseUrl
const model = createModelConfig(oldModel)
provider.models.push(model)
const config: MultiModelConfig = {
version: 1,
providers: [provider],
selectedModelId: model.id,
}
// Clear old keys after migration
localStorage.removeItem(OLD_KEYS.aiProvider)
localStorage.removeItem(OLD_KEYS.aiBaseUrl)
localStorage.removeItem(OLD_KEYS.aiApiKey)
localStorage.removeItem(OLD_KEYS.aiModel)
return config
}
/**
* Load config from localStorage
*/
function loadConfig(): MultiModelConfig {
if (typeof window === "undefined") return createEmptyConfig()
// First, check if new format exists
const stored = localStorage.getItem(STORAGE_KEYS.modelConfigs)
if (stored) {
try {
return JSON.parse(stored) as MultiModelConfig
} catch {
console.error("Failed to parse model config")
}
}
// Try migration from old format
const migrated = migrateOldConfig()
if (migrated) {
// Save migrated config
localStorage.setItem(
STORAGE_KEYS.modelConfigs,
JSON.stringify(migrated),
)
return migrated
}
return createEmptyConfig()
}
/**
* Save config to localStorage
*/
function saveConfig(config: MultiModelConfig): void {
if (typeof window === "undefined") return
localStorage.setItem(STORAGE_KEYS.modelConfigs, JSON.stringify(config))
}
export interface UseModelConfigReturn {
// State
config: MultiModelConfig
isLoaded: boolean
// Getters
models: FlattenedModel[]
selectedModel: FlattenedModel | undefined
selectedModelId: string | undefined
// Actions
setSelectedModelId: (modelId: string | undefined) => void
addProvider: (provider: ProviderName) => ProviderConfig
updateProvider: (
providerId: string,
updates: Partial<ProviderConfig>,
) => void
deleteProvider: (providerId: string) => void
addModel: (providerId: string, modelId: string) => ModelConfig
updateModel: (
providerId: string,
modelConfigId: string,
updates: Partial<ModelConfig>,
) => void
deleteModel: (providerId: string, modelConfigId: string) => void
resetConfig: () => void
}
export function useModelConfig(): UseModelConfigReturn {
const [config, setConfig] = useState<MultiModelConfig>(createEmptyConfig)
const [isLoaded, setIsLoaded] = useState(false)
// Load config on mount
useEffect(() => {
const loaded = loadConfig()
setConfig(loaded)
setIsLoaded(true)
}, [])
// Save config whenever it changes (after initial load)
useEffect(() => {
if (isLoaded) {
saveConfig(config)
}
}, [config, isLoaded])
// Derived state
const models = flattenModels(config)
const selectedModel = config.selectedModelId
? findModelById(config, config.selectedModelId)
: undefined
// Actions
const setSelectedModelId = useCallback((modelId: string | undefined) => {
setConfig((prev) => ({
...prev,
selectedModelId: modelId,
}))
}, [])
const addProvider = useCallback(
(provider: ProviderName): ProviderConfig => {
const newProvider = createProviderConfig(provider)
setConfig((prev) => ({
...prev,
providers: [...prev.providers, newProvider],
}))
return newProvider
},
[],
)
const updateProvider = useCallback(
(providerId: string, updates: Partial<ProviderConfig>) => {
setConfig((prev) => ({
...prev,
providers: prev.providers.map((p) =>
p.id === providerId ? { ...p, ...updates } : p,
),
}))
},
[],
)
const deleteProvider = useCallback((providerId: string) => {
setConfig((prev) => {
const provider = prev.providers.find((p) => p.id === providerId)
const modelIds = provider?.models.map((m) => m.id) || []
// Clear selected model if it belongs to deleted provider
const newSelectedId =
prev.selectedModelId && modelIds.includes(prev.selectedModelId)
? undefined
: prev.selectedModelId
return {
...prev,
providers: prev.providers.filter((p) => p.id !== providerId),
selectedModelId: newSelectedId,
}
})
}, [])
const addModel = useCallback(
(providerId: string, modelId: string): ModelConfig => {
const newModel = createModelConfig(modelId)
setConfig((prev) => ({
...prev,
providers: prev.providers.map((p) =>
p.id === providerId
? { ...p, models: [...p.models, newModel] }
: p,
),
}))
return newModel
},
[],
)
const updateModel = useCallback(
(
providerId: string,
modelConfigId: string,
updates: Partial<ModelConfig>,
) => {
setConfig((prev) => ({
...prev,
providers: prev.providers.map((p) =>
p.id === providerId
? {
...p,
models: p.models.map((m) =>
m.id === modelConfigId
? { ...m, ...updates }
: m,
),
}
: p,
),
}))
},
[],
)
const deleteModel = useCallback(
(providerId: string, modelConfigId: string) => {
setConfig((prev) => ({
...prev,
providers: prev.providers.map((p) =>
p.id === providerId
? {
...p,
models: p.models.filter(
(m) => m.id !== modelConfigId,
),
}
: p,
),
// Clear selected model if it was deleted
selectedModelId:
prev.selectedModelId === modelConfigId
? undefined
: prev.selectedModelId,
}))
},
[],
)
const resetConfig = useCallback(() => {
setConfig(createEmptyConfig())
}, [])
return {
config,
isLoaded,
models,
selectedModel,
selectedModelId: config.selectedModelId,
setSelectedModelId,
addProvider,
updateProvider,
deleteProvider,
addModel,
updateModel,
deleteModel,
resetConfig,
}
}
/**
* Get the AI config for the currently selected model.
* Returns format compatible with existing getAIConfig() usage.
*/
export function getSelectedAIConfig(): {
accessCode: string
aiProvider: string
aiBaseUrl: string
aiApiKey: string
aiModel: string
// AWS Bedrock credentials
awsAccessKeyId: string
awsSecretAccessKey: string
awsRegion: string
awsSessionToken: string
} {
const empty = {
accessCode: "",
aiProvider: "",
aiBaseUrl: "",
aiApiKey: "",
aiModel: "",
awsAccessKeyId: "",
awsSecretAccessKey: "",
awsRegion: "",
awsSessionToken: "",
}
if (typeof window === "undefined") return empty
// Get access code (separate from model config)
const accessCode = localStorage.getItem(STORAGE_KEYS.accessCode) || ""
// Load multi-model config
const stored = localStorage.getItem(STORAGE_KEYS.modelConfigs)
if (!stored) {
// Fallback to old format for backward compatibility
return {
accessCode,
aiProvider: localStorage.getItem(OLD_KEYS.aiProvider) || "",
aiBaseUrl: localStorage.getItem(OLD_KEYS.aiBaseUrl) || "",
aiApiKey: localStorage.getItem(OLD_KEYS.aiApiKey) || "",
aiModel: localStorage.getItem(OLD_KEYS.aiModel) || "",
// Old format didn't support AWS
awsAccessKeyId: "",
awsSecretAccessKey: "",
awsRegion: "",
awsSessionToken: "",
}
}
let config: MultiModelConfig
try {
config = JSON.parse(stored)
} catch {
return { ...empty, accessCode }
}
// No selected model = use server default
if (!config.selectedModelId) {
return { ...empty, accessCode }
}
// Find selected model
const model = findModelById(config, config.selectedModelId)
if (!model) {
return { ...empty, accessCode }
}
return {
accessCode,
aiProvider: model.provider,
aiBaseUrl: model.baseUrl || "",
aiApiKey: model.apiKey,
aiModel: model.modelId,
// AWS Bedrock credentials
awsAccessKeyId: model.awsAccessKeyId || "",
awsSecretAccessKey: model.awsSecretAccessKey || "",
awsRegion: model.awsRegion || "",
awsSessionToken: model.awsSessionToken || "",
}
}
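For context, a minimal sketch of how a component can consume this hook and how the request path reads the same selection back; the component name and markup are illustrative only, not code from this repo.

```tsx
// Illustrative sketch: component name and JSX are placeholders.
"use client"
import { getSelectedAIConfig, useModelConfig } from "@/hooks/use-model-config"

function ModelPicker() {
    const { models, selectedModelId, setSelectedModelId, isLoaded } = useModelConfig()
    if (!isLoaded) return null
    return (
        <select
            value={selectedModelId ?? ""}
            onChange={(e) => setSelectedModelId(e.target.value || undefined)}
        >
            <option value="">Server default</option>
            {models.map((m) => (
                <option key={m.id} value={m.id}>
                    {m.providerLabel}: {m.modelId}
                </option>
            ))}
        </select>
    )
}

// Outside React (e.g., when building request headers), the same selection is read back:
const { aiProvider, aiModel, aiApiKey } = getSelectedAIConfig()
```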

View File

@@ -14,19 +14,14 @@ export function register() {
publicKey: process.env.LANGFUSE_PUBLIC_KEY,
secretKey: process.env.LANGFUSE_SECRET_KEY,
baseUrl: process.env.LANGFUSE_BASEURL,
// Filter out Next.js HTTP request spans so AI SDK spans become root traces
// Whitelist approach: only export AI-related spans
shouldExportSpan: ({ otelSpan }) => {
const spanName = otelSpan.name
// Skip Next.js HTTP infrastructure spans
if (
spanName.startsWith("POST /") ||
spanName.startsWith("GET /") ||
spanName.includes("BaseServer") ||
spanName.includes("handleRequest")
) {
return false
// Only export AI SDK spans (ai.*) and our explicit "chat" wrapper
if (spanName === "chat" || spanName.startsWith("ai.")) {
return true
}
return true
return false
},
})
@@ -36,4 +31,5 @@ export function register() {
// Register globally so AI SDK's telemetry also uses this processor
tracerProvider.register()
console.log("[Langfuse] Instrumentation initialized successfully")
}
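Concretely, the whitelist keeps only the spans Langfuse cares about; the span names below are illustrative examples, not an exhaustive list of AI SDK span names.

```ts
const shouldExport = (spanName: string) =>
    spanName === "chat" || spanName.startsWith("ai.")

shouldExport("chat")                     // true:  explicit wrapper span
shouldExport("ai.streamText")            // true:  AI SDK telemetry span
shouldExport("POST /api/chat")           // false: Next.js HTTP infrastructure
shouldExport("BaseServer.handleRequest") // false: framework internals
```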

View File

@@ -19,6 +19,7 @@ export type ProviderName =
| "openrouter"
| "deepseek"
| "siliconflow"
| "sglang"
| "gateway"
interface ModelConfig {
@@ -33,6 +34,11 @@ export interface ClientOverrides {
baseUrl?: string | null
apiKey?: string | null
modelId?: string | null
// AWS Bedrock credentials
awsAccessKeyId?: string | null
awsSecretAccessKey?: string | null
awsRegion?: string | null
awsSessionToken?: string | null
}
// Providers that can be used with client-provided API keys
@@ -41,9 +47,11 @@ const ALLOWED_CLIENT_PROVIDERS: ProviderName[] = [
"anthropic",
"google",
"azure",
"bedrock",
"openrouter",
"deepseek",
"siliconflow",
"sglang",
"gateway",
]
@@ -87,8 +95,8 @@ function parseIntSafe(
* Supports various AI SDK providers with their unique configuration options
*
* Environment variables:
* - OPENAI_REASONING_EFFORT: OpenAI reasoning effort level (minimal/low/medium/high) - for o1/o3/gpt-5
* - OPENAI_REASONING_SUMMARY: OpenAI reasoning summary (none/brief/detailed) - auto-enabled for o1/o3/gpt-5
* - OPENAI_REASONING_EFFORT: OpenAI reasoning effort level (minimal/low/medium/high) - for o1/o3/o4/gpt-5
* - OPENAI_REASONING_SUMMARY: OpenAI reasoning summary (auto/detailed) - auto-enabled for o1/o3/o4/gpt-5
* - ANTHROPIC_THINKING_BUDGET_TOKENS: Anthropic thinking budget in tokens (1024-64000)
* - ANTHROPIC_THINKING_TYPE: Anthropic thinking type (enabled)
* - GOOGLE_THINKING_BUDGET: Google Gemini 2.5 thinking budget in tokens (1024-100000)
@@ -110,18 +118,19 @@ function buildProviderOptions(
const reasoningEffort = process.env.OPENAI_REASONING_EFFORT
const reasoningSummary = process.env.OPENAI_REASONING_SUMMARY
// OpenAI reasoning models (o1, o3, gpt-5) need reasoningSummary to return thoughts
// OpenAI reasoning models (o1, o3, o4, gpt-5) need reasoningSummary to return thoughts
if (
modelId &&
(modelId.includes("o1") ||
modelId.includes("o3") ||
modelId.includes("o4") ||
modelId.includes("gpt-5"))
) {
options.openai = {
// Auto-enable reasoning summary for reasoning models (default: detailed)
// Auto-enable reasoning summary for reasoning models
// Use 'auto' as default since not all models support 'detailed'
reasoningSummary:
(reasoningSummary as "none" | "brief" | "detailed") ||
"detailed",
(reasoningSummary as "auto" | "detailed") || "auto",
}
// Optionally configure reasoning effort
@@ -144,8 +153,7 @@ function buildProviderOptions(
}
if (reasoningSummary) {
options.openai.reasoningSummary = reasoningSummary as
| "none"
| "brief"
| "auto"
| "detailed"
}
}
@@ -337,6 +345,7 @@ function buildProviderOptions(
case "deepseek":
case "openrouter":
case "siliconflow":
case "sglang":
case "gateway": {
// These providers don't have reasoning configs in AI SDK yet
// Gateway passes through to underlying providers which handle their own configs
@@ -361,6 +370,7 @@ const PROVIDER_ENV_VARS: Record<ProviderName, string | null> = {
openrouter: "OPENROUTER_API_KEY",
deepseek: "DEEPSEEK_API_KEY",
siliconflow: "SILICONFLOW_API_KEY",
sglang: "SGLANG_API_KEY",
gateway: "AI_GATEWAY_API_KEY",
}
@@ -426,7 +436,7 @@ function validateProviderCredentials(provider: ProviderName): void {
* Get the AI model based on environment variables
*
* Environment variables:
* - AI_PROVIDER: The provider to use (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow)
* - AI_PROVIDER: The provider to use (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway)
* - AI_MODEL: The model ID/name for the selected provider
*
* Provider-specific env vars:
@@ -442,6 +452,8 @@ function validateProviderCredentials(provider: ProviderName): void {
* - DEEPSEEK_BASE_URL: DeepSeek endpoint (optional)
* - SILICONFLOW_API_KEY: SiliconFlow API key
* - SILICONFLOW_BASE_URL: SiliconFlow endpoint (optional, defaults to https://api.siliconflow.com/v1)
* - SGLANG_API_KEY: SGLang API key
* - SGLANG_BASE_URL: SGLang endpoint (optional)
*/
export function getAIModel(overrides?: ClientOverrides): ModelConfig {
// SECURITY: Prevent SSRF attacks (GHSA-9qf7-mprq-9qgm)
@@ -510,6 +522,7 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
`- OPENROUTER_API_KEY for OpenRouter\n` +
`- AZURE_API_KEY for Azure\n` +
`- SILICONFLOW_API_KEY for SiliconFlow\n` +
`- SGLANG_API_KEY for SGLang\n` +
`Or set AI_PROVIDER=ollama for local Ollama.`,
)
} else {
@@ -537,12 +550,25 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
switch (provider) {
case "bedrock": {
// Use credential provider chain for IAM role support (Lambda, EC2, etc.)
// Falls back to env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for local dev
const bedrockProvider = createAmazonBedrock({
region: process.env.AWS_REGION || "us-west-2",
credentialProvider: fromNodeProviderChain(),
})
// Use client-provided credentials if available, otherwise fall back to IAM/env vars
const hasClientCredentials =
overrides?.awsAccessKeyId && overrides?.awsSecretAccessKey
const bedrockRegion =
overrides?.awsRegion || process.env.AWS_REGION || "us-west-2"
const bedrockProvider = hasClientCredentials
? createAmazonBedrock({
region: bedrockRegion,
accessKeyId: overrides.awsAccessKeyId!,
secretAccessKey: overrides.awsSecretAccessKey!,
...(overrides?.awsSessionToken && {
sessionToken: overrides.awsSessionToken,
}),
})
: createAmazonBedrock({
region: bedrockRegion,
credentialProvider: fromNodeProviderChain(),
})
model = bedrockProvider(modelId)
// Add Anthropic beta options if using Claude models via Bedrock
if (modelId.includes("anthropic.claude")) {
@@ -562,12 +588,16 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
case "openai": {
const apiKey = overrides?.apiKey || process.env.OPENAI_API_KEY
const baseURL = overrides?.baseUrl || process.env.OPENAI_BASE_URL
if (baseURL || overrides?.apiKey) {
const customOpenAI = createOpenAI({
apiKey,
...(baseURL && { baseURL }),
})
if (baseURL) {
// Custom base URL = third-party proxy, use Chat Completions API
// for compatibility (most proxies don't support /responses endpoint)
const customOpenAI = createOpenAI({ apiKey, baseURL })
model = customOpenAI.chat(modelId)
} else if (overrides?.apiKey) {
// Custom API key but official OpenAI endpoint, use Responses API
// to support reasoning for gpt-5, o1, o3, o4 models
const customOpenAI = createOpenAI({ apiKey })
model = customOpenAI(modelId)
} else {
model = openai(modelId)
}
@@ -679,6 +709,112 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
break
}
case "sglang": {
const apiKey = overrides?.apiKey || process.env.SGLANG_API_KEY
const baseURL = overrides?.baseUrl || process.env.SGLANG_BASE_URL
const sglangProvider = createOpenAI({
apiKey,
baseURL,
// Add a custom fetch wrapper to intercept and fix the stream from sglang
fetch: async (url, options) => {
const response = await fetch(url, options)
if (!response.body) {
return response
}
// Create a transform stream to fix the non-compliant sglang stream
let buffer = ""
const decoder = new TextDecoder()
const transformStream = new TransformStream({
transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true })
// Process all complete messages in the buffer
let messageEndPos
while (
(messageEndPos = buffer.indexOf("\n\n")) !== -1
) {
const message = buffer.substring(
0,
messageEndPos,
)
buffer = buffer.substring(messageEndPos + 2) // Move past the '\n\n'
if (message.startsWith("data: ")) {
const jsonStr = message.substring(6).trim()
if (jsonStr === "[DONE]") {
controller.enqueue(
new TextEncoder().encode(
message + "\n\n",
),
)
continue
}
try {
const data = JSON.parse(jsonStr)
const delta = data.choices?.[0]?.delta
if (delta) {
// Fix 1: remove invalid empty role
if (delta.role === "") {
delete delta.role
}
// Fix 2: remove non-standard reasoning_content field
if ("reasoning_content" in delta) {
delete delta.reasoning_content
}
}
// Re-serialize and forward the corrected data with the correct SSE format
controller.enqueue(
new TextEncoder().encode(
`data: ${JSON.stringify(data)}\n\n`,
),
)
} catch (e) {
// If parsing fails, forward the original message to avoid breaking the stream.
controller.enqueue(
new TextEncoder().encode(
message + "\n\n",
),
)
}
} else if (message.trim() !== "") {
// Pass through other message types (e.g., 'event: ...')
controller.enqueue(
new TextEncoder().encode(
message + "\n\n",
),
)
}
}
},
flush(controller) {
// If there's anything left in the buffer, forward it.
if (buffer.trim()) {
controller.enqueue(
new TextEncoder().encode(buffer),
)
}
},
})
const transformedBody =
response.body.pipeThrough(transformStream)
// Return a new response with the transformed body
return new Response(transformedBody, {
status: response.status,
statusText: response.statusText,
headers: response.headers,
})
},
})
model = sglangProvider.chat(modelId)
break
}
case "gateway": {
// Vercel AI Gateway - unified access to multiple AI providers
// Model format: "provider/model" e.g., "openai/gpt-4o", "anthropic/claude-sonnet-4-5"
@@ -702,7 +838,7 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
default:
throw new Error(
`Unknown AI provider: ${provider}. Supported providers: bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, gateway`,
`Unknown AI provider: ${provider}. Supported providers: bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway`,
)
}
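Putting the new SGLang wiring together, a self-hosted setup would be configured roughly as below; every value is a placeholder, and the endpoint/key conventions of your SGLang deployment may differ.

```
# .env.local (placeholder values)
AI_PROVIDER=sglang
AI_MODEL=Qwen/Qwen2.5-72B-Instruct
SGLANG_BASE_URL=http://localhost:30000/v1
SGLANG_API_KEY=sk-anything-your-server-accepts
```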

37
lib/base-path.ts Normal file
View File

@@ -0,0 +1,37 @@
/**
* Get the base path for API calls and static assets
* This is used for subdirectory deployment support
*
* Example: If deployed at https://example.com/nextaidrawio, this returns "/nextaidrawio"
* For root deployment, this returns ""
*
* Set NEXT_PUBLIC_BASE_PATH environment variable to your subdirectory path (e.g., /nextaidrawio)
*/
export function getBasePath(): string {
// Read from environment variable (must start with NEXT_PUBLIC_ to be available on client)
const basePath = process.env.NEXT_PUBLIC_BASE_PATH || ""
if (basePath && !basePath.startsWith("/")) {
console.warn("NEXT_PUBLIC_BASE_PATH should start with /")
}
return basePath
}
/**
* Get full API endpoint URL
* @param endpoint - API endpoint path (e.g., "/api/chat", "/api/config")
* @returns Full API path with base path prefix
*/
export function getApiEndpoint(endpoint: string): string {
const basePath = getBasePath()
return `${basePath}${endpoint}`
}
/**
* Get full static asset URL
* @param assetPath - Asset path (e.g., "/example.png", "/chain-of-thought.txt")
* @returns Full asset path with base path prefix
*/
export function getAssetUrl(assetPath: string): string {
const basePath = getBasePath()
return `${basePath}${assetPath}`
}
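A quick illustration of the two helpers, assuming the subdirectory deployment from the comment above:

```ts
import { getApiEndpoint, getAssetUrl } from "@/lib/base-path"

// With NEXT_PUBLIC_BASE_PATH="/nextaidrawio"
getApiEndpoint("/api/chat")  // "/nextaidrawio/api/chat"
getAssetUrl("/example.png")  // "/nextaidrawio/example.png"

// With NEXT_PUBLIC_BASE_PATH unset (root deployment), paths pass through unchanged
getApiEndpoint("/api/chat")  // "/api/chat"
```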

297
lib/dynamo-quota-manager.ts Normal file
View File

@@ -0,0 +1,297 @@
import {
ConditionalCheckFailedException,
DynamoDBClient,
GetItemCommand,
UpdateItemCommand,
} from "@aws-sdk/client-dynamodb"
// Quota tracking is OPT-IN: only enabled if DYNAMODB_QUOTA_TABLE is explicitly set
// OSS users who don't need quota tracking can simply not set this env var
const TABLE = process.env.DYNAMODB_QUOTA_TABLE
const DYNAMODB_REGION = process.env.DYNAMODB_REGION || "ap-northeast-1"
// Timezone for daily quota reset (e.g., "Asia/Tokyo" for JST midnight reset)
// Defaults to UTC if not set
let QUOTA_TIMEZONE = process.env.QUOTA_TIMEZONE || "UTC"
// Validate timezone at module load
try {
new Intl.DateTimeFormat("en-CA", { timeZone: QUOTA_TIMEZONE }).format(
new Date(),
)
} catch {
console.warn(
`[quota] Invalid QUOTA_TIMEZONE "${QUOTA_TIMEZONE}", using UTC`,
)
QUOTA_TIMEZONE = "UTC"
}
/**
* Get today's date string in the configured timezone (YYYY-MM-DD format)
*/
function getTodayInTimezone(): string {
return new Intl.DateTimeFormat("en-CA", {
timeZone: QUOTA_TIMEZONE,
}).format(new Date())
}
// Only create client if quota is enabled
const client = TABLE ? new DynamoDBClient({ region: DYNAMODB_REGION }) : null
/**
* Check if server-side quota tracking is enabled.
* Quota is opt-in: only enabled when DYNAMODB_QUOTA_TABLE env var is set.
*/
export function isQuotaEnabled(): boolean {
return !!TABLE
}
interface QuotaLimits {
requests: number // Daily request limit
tokens: number // Daily token limit
tpm: number // Tokens per minute
}
interface QuotaCheckResult {
allowed: boolean
error?: string
type?: "request" | "token" | "tpm"
used?: number
limit?: number
}
/**
* Check all quotas and increment request count atomically.
* Uses ConditionExpression to prevent race conditions.
* Returns which limit was exceeded if any.
*/
export async function checkAndIncrementRequest(
ip: string,
limits: QuotaLimits,
): Promise<QuotaCheckResult> {
// Skip if quota tracking not enabled
if (!client || !TABLE) {
return { allowed: true }
}
const today = getTodayInTimezone()
const currentMinute = Math.floor(Date.now() / 60000).toString()
const ttl = Math.floor(Date.now() / 1000) + 7 * 24 * 60 * 60
try {
// First, try to reset counts if it's a new day (atomic day reset)
// This will succeed only if lastResetDate < today or doesn't exist
try {
await client.send(
new UpdateItemCommand({
TableName: TABLE,
Key: { PK: { S: `IP#${ip}` } },
// Reset all counts to 1/0 for the new day
UpdateExpression: `
SET lastResetDate = :today,
dailyReqCount = :one,
dailyTokenCount = :zero,
lastMinute = :minute,
tpmCount = :zero,
#ttl = :ttl
`,
// Only succeed if it's a new day (or new item)
ConditionExpression: `
attribute_not_exists(lastResetDate) OR lastResetDate < :today
`,
ExpressionAttributeNames: { "#ttl": "ttl" },
ExpressionAttributeValues: {
":today": { S: today },
":zero": { N: "0" },
":one": { N: "1" },
":minute": { S: currentMinute },
":ttl": { N: String(ttl) },
},
}),
)
// New day reset successful
return { allowed: true }
} catch (resetError: any) {
// If condition failed, it's the same day - continue to increment logic
if (!(resetError instanceof ConditionalCheckFailedException)) {
throw resetError // Re-throw unexpected errors
}
}
// Same day - increment request count with limit checks
await client.send(
new UpdateItemCommand({
TableName: TABLE,
Key: { PK: { S: `IP#${ip}` } },
// Increment request count, handle minute boundary for TPM
UpdateExpression: `
SET lastMinute = :minute,
tpmCount = if_not_exists(tpmCount, :zero),
#ttl = :ttl
ADD dailyReqCount :one
`,
// Check all limits before allowing increment
ConditionExpression: `
lastResetDate = :today AND
(attribute_not_exists(dailyReqCount) OR dailyReqCount < :reqLimit) AND
(attribute_not_exists(dailyTokenCount) OR dailyTokenCount < :tokenLimit) AND
(attribute_not_exists(lastMinute) OR lastMinute <> :minute OR
attribute_not_exists(tpmCount) OR tpmCount < :tpmLimit)
`,
ExpressionAttributeNames: { "#ttl": "ttl" },
ExpressionAttributeValues: {
":today": { S: today },
":zero": { N: "0" },
":one": { N: "1" },
":minute": { S: currentMinute },
":ttl": { N: String(ttl) },
":reqLimit": { N: String(limits.requests || 999999) },
":tokenLimit": { N: String(limits.tokens || 999999) },
":tpmLimit": { N: String(limits.tpm || 999999) },
},
}),
)
return { allowed: true }
} catch (e: any) {
// Condition failed - need to determine which limit was exceeded
if (e instanceof ConditionalCheckFailedException) {
// Get current counts to determine which limit was hit
try {
const getResult = await client.send(
new GetItemCommand({
TableName: TABLE,
Key: { PK: { S: `IP#${ip}` } },
}),
)
const item = getResult.Item
const storedDate = item?.lastResetDate?.S
const storedMinute = item?.lastMinute?.S
const isNewDay = !storedDate || storedDate < today
const dailyReqCount = isNewDay
? 0
: Number(item?.dailyReqCount?.N || 0)
const dailyTokenCount = isNewDay
? 0
: Number(item?.dailyTokenCount?.N || 0)
const tpmCount =
storedMinute !== currentMinute
? 0
: Number(item?.tpmCount?.N || 0)
// Determine which limit was exceeded
if (limits.requests > 0 && dailyReqCount >= limits.requests) {
return {
allowed: false,
type: "request",
error: "Daily request limit exceeded",
used: dailyReqCount,
limit: limits.requests,
}
}
if (limits.tokens > 0 && dailyTokenCount >= limits.tokens) {
return {
allowed: false,
type: "token",
error: "Daily token limit exceeded",
used: dailyTokenCount,
limit: limits.tokens,
}
}
if (limits.tpm > 0 && tpmCount >= limits.tpm) {
return {
allowed: false,
type: "tpm",
error: "Rate limit exceeded (tokens per minute)",
used: tpmCount,
limit: limits.tpm,
}
}
// Condition failed but no limit clearly exceeded - race condition edge case
// Fail safe by allowing (could be a reset race)
console.warn(
`[quota] Condition failed but no limit exceeded for IP prefix: ${ip.slice(0, 8)}...`,
)
return { allowed: true }
} catch (getError: any) {
console.error(
`[quota] Failed to get quota details after condition failure, IP prefix: ${ip.slice(0, 8)}..., error: ${getError.message}`,
)
return { allowed: true } // Fail open
}
}
// Other DynamoDB errors - fail open
console.error(
`[quota] DynamoDB error (fail-open), IP prefix: ${ip.slice(0, 8)}..., error: ${e.message}`,
)
return { allowed: true }
}
}
/**
* Record token usage after response completes.
* Uses atomic operations to update both daily token count and TPM count.
* Handles minute boundaries atomically to prevent race conditions.
*/
export async function recordTokenUsage(
ip: string,
tokens: number,
): Promise<void> {
// Skip if quota tracking not enabled
if (!client || !TABLE) return
if (!Number.isFinite(tokens) || tokens <= 0) return
const currentMinute = Math.floor(Date.now() / 60000).toString()
const ttl = Math.floor(Date.now() / 1000) + 7 * 24 * 60 * 60
try {
// Try to update assuming same minute (most common case)
// Uses condition to ensure we're in the same minute
await client.send(
new UpdateItemCommand({
TableName: TABLE,
Key: { PK: { S: `IP#${ip}` } },
UpdateExpression:
"SET #ttl = :ttl ADD dailyTokenCount :tokens, tpmCount :tokens",
ConditionExpression: "lastMinute = :minute",
ExpressionAttributeNames: { "#ttl": "ttl" },
ExpressionAttributeValues: {
":minute": { S: currentMinute },
":tokens": { N: String(tokens) },
":ttl": { N: String(ttl) },
},
}),
)
} catch (e: any) {
if (e instanceof ConditionalCheckFailedException) {
// Different minute - reset TPM count and set new minute
try {
await client.send(
new UpdateItemCommand({
TableName: TABLE,
Key: { PK: { S: `IP#${ip}` } },
UpdateExpression:
"SET lastMinute = :minute, tpmCount = :tokens, #ttl = :ttl ADD dailyTokenCount :tokens",
ExpressionAttributeNames: { "#ttl": "ttl" },
ExpressionAttributeValues: {
":minute": { S: currentMinute },
":tokens": { N: String(tokens) },
":ttl": { N: String(ttl) },
},
}),
)
} catch (retryError: any) {
console.error(
`[quota] Failed to record tokens (retry), IP prefix: ${ip.slice(0, 8)}..., tokens: ${tokens}, error: ${retryError.message}`,
)
}
} else {
console.error(
`[quota] Failed to record tokens, IP prefix: ${ip.slice(0, 8)}..., tokens: ${tokens}, error: ${e.message}`,
)
}
}
}
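A sketch of how an API route can consume these helpers; the limit values, token count, and 429 payload shape are assumptions for illustration, not the actual route code.

```ts
// Sketch only: limits and the response body are illustrative assumptions.
import {
    checkAndIncrementRequest,
    isQuotaEnabled,
    recordTokenUsage,
} from "@/lib/dynamo-quota-manager"

export async function POST(req: Request) {
    const ip = req.headers.get("x-forwarded-for")?.split(",")[0]?.trim() || "anonymous"

    if (isQuotaEnabled()) {
        const check = await checkAndIncrementRequest(ip, {
            requests: 100,   // illustrative daily request limit
            tokens: 200_000, // illustrative daily token limit
            tpm: 8_000,      // illustrative tokens-per-minute limit
        })
        if (!check.allowed) {
            return Response.json(
                { error: check.error, type: check.type, used: check.used, limit: check.limit },
                { status: 429 },
            )
        }
    }

    // ...generate the response, then record usage once token counts are known
    await recordTokenUsage(ip, 1234 /* illustrative total token count */)
    return new Response("ok")
}
```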

View File

@@ -14,6 +14,7 @@
"about": "About",
"editor": "Editor",
"newChat": "Start fresh chat",
"github": "GitHub",
"settings": "Settings",
"hidePanel": "Hide chat panel (Ctrl+B)",
"showPanel": "Show chat panel (Ctrl+B)",
@@ -87,6 +88,8 @@
"overrides": "Overrides",
"clearSettings": "Clear Settings",
"useServerDefault": "Use Server Default",
"language": "Language",
"languageDescription": "Choose your interface language.",
"theme": "Theme",
"themeDescription": "Dark/Light mode for interface and DrawIO canvas.",
"drawioStyle": "DrawIO Style",
@@ -147,6 +150,7 @@
"tokenLimit": "Daily Token Limit Reached",
"tpmLimit": "Rate Limit",
"tpmMessage": "Too many requests. Please wait a moment.",
"tpmMessageDetailed": "Rate limit reached ({limit} tokens/min). Please wait {seconds} seconds before sending another request.",
"messageApi": "Oops — you've reached the daily API limit for this demo! As an indie developer covering all the API costs myself, I have to set these limits to keep things sustainable.",
"messageToken": "Oops — you've reached the daily token limit for this demo! As an indie developer covering all the API costs myself, I have to set these limits to keep things sustainable.",
"tip": "<strong>Tip:</strong> You can use your own API key (click the Settings icon) or self-host the project to bypass these limits.",
@@ -180,5 +184,65 @@
"seekingSponsorship": "Call for Sponsorship",
"contactMe": "Contact Me",
"usageNotice": "Due to high usage, I have changed the model from Claude to minimax-m2 and added some usage limits. See About page for details."
},
"modelConfig": {
"title": "AI Model Configuration",
"description": "Configure multiple AI providers and models",
"configure": "Configure",
"addProvider": "Add Provider",
"addModel": "Add Model",
"modelId": "Model ID",
"modelLabel": "Display Label",
"streaming": "Enable Streaming",
"deleteProvider": "Delete Provider",
"deleteModel": "Delete Model",
"noModels": "No models configured. Add a model to get started.",
"selectProvider": "Select a provider or add a new one",
"configureMultiple": "Configure multiple AI providers and switch between them easily",
"apiKeyStored": "API keys are stored locally in your browser",
"test": "Test",
"validationError": "Validation failed",
"addModelFirst": "Add at least one model to validate",
"providers": "Providers",
"addProviderHint": "Add a provider to get started",
"verified": "Verified",
"configuration": "Configuration",
"displayName": "Display Name",
"awsAccessKeyId": "AWS Access Key ID",
"awsSecretAccessKey": "AWS Secret Access Key",
"awsRegion": "AWS Region",
"selectRegion": "Select region",
"apiKey": "API Key",
"enterApiKey": "Enter your API key",
"enterSecretKey": "Enter your secret access key",
"baseUrl": "Base URL",
"optional": "(optional)",
"customEndpoint": "Custom endpoint URL",
"models": "Models",
"customModelId": "Custom model ID...",
"allAdded": "All added",
"suggested": "Suggested",
"noModelsConfigured": "No models configured",
"modelIdEmpty": "Model ID cannot be empty",
"modelIdExists": "This model ID already exists",
"configureProviders": "Configure AI Providers",
"selectProviderHint": "Select a provider from the list or add a new one to configure API keys and models",
"deleteConfirmDesc": "Are you sure you want to delete {name}? This will remove all configured models and cannot be undone.",
"typeToConfirm": "Type \"{name}\" to confirm",
"typeProviderName": "Type provider name...",
"modelsConfiguredCount": "{count} model(s) configured",
"validationFailedCount": "{count} model(s) failed validation",
"cancel": "Cancel",
"delete": "Delete",
"clickToChange": "(click to change)",
"usingServerDefault": "Using server default model",
"selectModel": "Select Model",
"searchModels": "Search models...",
"noVerifiedModels": "No verified models. Test your models first.",
"noModelsFound": "No models found.",
"default": "Default",
"serverDefault": "Server Default",
"configureModels": "Configure Models...",
"onlyVerifiedShown": "Only verified models are shown"
}
}
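The new tpmMessageDetailed string carries {limit} and {seconds} placeholders that are substituted at runtime (see the use-quota-manager change later in this diff); the snippet below assumes formatMessage performs simple placeholder replacement.

```ts
import { formatMessage } from "@/lib/i18n/utils"

// Assumed behavior: plain {name} substitution over the dictionary string.
const msg = formatMessage(
    "Rate limit reached ({limit} tokens/min). Please wait {seconds} seconds before sending another request.",
    { limit: "8k", seconds: 60 },
)
// => "Rate limit reached (8k tokens/min). Please wait 60 seconds before sending another request."
```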

View File

@@ -14,6 +14,7 @@
"about": "概要",
"editor": "エディタ",
"newChat": "新しいチャットを開始",
"github": "GitHub",
"settings": "設定",
"hidePanel": "チャットパネルを非表示 (Ctrl+B)",
"showPanel": "チャットパネルを表示 (Ctrl+B)",
@@ -87,6 +88,8 @@
"overrides": "上書き",
"clearSettings": "設定をクリア",
"useServerDefault": "サーバーデフォルトを使用",
"language": "言語",
"languageDescription": "インターフェース言語を選択します。",
"theme": "テーマ",
"themeDescription": "インターフェースと DrawIO キャンバスのダーク/ライトモード。",
"drawioStyle": "DrawIO スタイル",
@@ -147,6 +150,7 @@
"tokenLimit": "1日のトークン制限に達しました",
"tpmLimit": "レート制限",
"tpmMessage": "リクエストが多すぎます。しばらくお待ちください。",
"tpmMessageDetailed": "レート制限に達しました({limit}トークン/分)。{seconds}秒待ってからもう一度リクエストしてください。",
"messageApi": "おっと — このデモの1日の API 制限に達しました!個人開発者として API コストをすべて負担しているため、持続可能性を保つためにこれらの制限を設定する必要があります。",
"messageToken": "おっと — このデモの1日のトークン制限に達しました個人開発者として API コストをすべて負担しているため、持続可能性を保つためにこれらの制限を設定する必要があります。",
"tip": "<strong>ヒント:</strong>独自の API キーを使用する(設定アイコンをクリック)か、プロジェクトをセルフホストしてこれらの制限を回避できます。",
@@ -180,5 +184,65 @@
"seekingSponsorship": "スポンサー募集",
"contactMe": "お問い合わせ",
"usageNotice": "利用量の増加に伴い、コスト削減のためモデルを Claude から minimax-m2 に変更し、いくつかの利用制限を設けました。詳細は概要ページをご覧ください。"
},
"modelConfig": {
"title": "AIモデル設定",
"description": "複数のAIプロバイダーとモデルを設定",
"configure": "設定",
"addProvider": "プロバイダーを追加",
"addModel": "モデルを追加",
"modelId": "モデルID",
"modelLabel": "表示名",
"streaming": "ストリーミングを有効",
"deleteProvider": "プロバイダーを削除",
"deleteModel": "モデルを削除",
"noModels": "モデルが設定されていません。モデルを追加してください。",
"selectProvider": "プロバイダーを選択または追加してください",
"configureMultiple": "複数のAIプロバイダーを設定して簡単に切り替え",
"apiKeyStored": "APIキーはブラウザにローカル保存されます",
"test": "テスト",
"validationError": "検証に失敗しました",
"addModelFirst": "検証するには少なくとも1つのモデルを追加してください",
"providers": "プロバイダー",
"addProviderHint": "プロバイダーを追加して開始",
"verified": "検証済み",
"configuration": "設定",
"displayName": "表示名",
"awsAccessKeyId": "AWS アクセスキー ID",
"awsSecretAccessKey": "AWS シークレットアクセスキー",
"awsRegion": "AWS リージョン",
"selectRegion": "リージョンを選択",
"apiKey": "API キー",
"enterApiKey": "API キーを入力",
"enterSecretKey": "シークレットアクセスキーを入力",
"baseUrl": "ベース URL",
"optional": "(オプション)",
"customEndpoint": "カスタムエンドポイント URL",
"models": "モデル",
"customModelId": "カスタムモデル ID...",
"allAdded": "すべて追加済み",
"suggested": "おすすめ",
"noModelsConfigured": "モデルが設定されていません",
"modelIdEmpty": "モデル ID は空にできません",
"modelIdExists": "このモデル ID は既に存在します",
"configureProviders": "AI プロバイダーを設定",
"selectProviderHint": "リストからプロバイダーを選択するか、新規追加して API キーとモデルを設定",
"deleteConfirmDesc": "{name} を削除してもよろしいですか?設定されたすべてのモデルが削除され、元に戻せません。",
"typeToConfirm": "確認のため「{name}」と入力",
"typeProviderName": "プロバイダー名を入力...",
"modelsConfiguredCount": "{count} 個のモデルを設定済み",
"validationFailedCount": "{count} 個のモデルの検証に失敗",
"cancel": "キャンセル",
"delete": "削除",
"clickToChange": "(クリックして変更)",
"usingServerDefault": "サーバーデフォルトモデルを使用中",
"selectModel": "モデルを選択",
"searchModels": "モデルを検索...",
"noVerifiedModels": "検証済みのモデルがありません。先にモデルをテストしてください。",
"noModelsFound": "モデルが見つかりません。",
"default": "デフォルト",
"serverDefault": "サーバーデフォルト",
"configureModels": "モデルを設定...",
"onlyVerifiedShown": "検証済みのモデルのみ表示"
}
}

View File

@@ -14,6 +14,7 @@
"about": "关于",
"editor": "编辑器",
"newChat": "开始新对话",
"github": "GitHub",
"settings": "设置",
"hidePanel": "隐藏聊天面板 (Ctrl+B)",
"showPanel": "显示聊天面板 (Ctrl+B)",
@@ -87,6 +88,8 @@
"overrides": "覆盖",
"clearSettings": "清除设置",
"useServerDefault": "使用服务器默认值",
"language": "语言",
"languageDescription": "选择界面语言。",
"theme": "主题",
"themeDescription": "界面和 DrawIO 画布的深色/浅色模式。",
"drawioStyle": "DrawIO 样式",
@@ -147,6 +150,7 @@
"tokenLimit": "已达每日令牌限制",
"tpmLimit": "速率限制",
"tpmMessage": "请求过多。请稍等片刻。",
"tpmMessageDetailed": "达到速率限制({limit} 令牌/分钟)。请等待 {seconds} 秒后再发送请求。",
"messageApi": "糟糕 — 您已达到此演示的每日 API 限制!作为一名独立开发者,我自己承担所有 API 费用,因此必须设置这些限制以保持可持续性。",
"messageToken": "糟糕 — 您已达到此演示的每日令牌限制!作为一名独立开发者,我自己承担所有 API 费用,因此必须设置这些限制以保持可持续性。",
"tip": "<strong>提示:</strong>您可以使用自己的 API 密钥(点击设置图标)或自托管项目来绕过这些限制。",
@@ -180,5 +184,65 @@
"seekingSponsorship": "寻求赞助(求大佬捞一把)",
"contactMe": "联系我",
"usageNotice": "由于使用量过高,我已将模型从 Claude 更换为 minimax-m2并设置了一些用量限制。详情请查看关于页面。"
},
"modelConfig": {
"title": "AI 模型配置",
"description": "配置多个 AI 提供商和模型",
"configure": "配置",
"addProvider": "添加提供商",
"addModel": "添加模型",
"modelId": "模型 ID",
"modelLabel": "显示名称",
"streaming": "启用流式输出",
"deleteProvider": "删除提供商",
"deleteModel": "删除模型",
"noModels": "尚未配置模型。添加模型以开始使用。",
"selectProvider": "选择一个提供商或添加新的",
"configureMultiple": "配置多个 AI 提供商并轻松切换",
"apiKeyStored": "API 密钥存储在您的浏览器本地",
"test": "测试",
"validationError": "验证失败",
"addModelFirst": "请先添加至少一个模型以进行验证",
"providers": "提供商",
"addProviderHint": "添加提供商即可开始使用",
"verified": "已验证",
"configuration": "配置",
"displayName": "显示名称",
"awsAccessKeyId": "AWS 访问密钥 ID",
"awsSecretAccessKey": "AWS Secret Access Key",
"awsRegion": "AWS 区域",
"selectRegion": "选择区域",
"apiKey": "API 密钥",
"enterApiKey": "输入您的 API 密钥",
"enterSecretKey": "输入您的 Secret Key",
"baseUrl": "基础 URL",
"optional": "(可选)",
"customEndpoint": "自定义端点 URL",
"models": "模型",
"customModelId": "自定义模型 ID...",
"allAdded": "已全部添加",
"suggested": "推荐",
"noModelsConfigured": "尚未配置模型",
"modelIdEmpty": "模型 ID 不能为空",
"modelIdExists": "此模型 ID 已存在",
"configureProviders": "配置 AI 提供商",
"selectProviderHint": "从列表中选择提供商或添加新的以配置 API 密钥和模型",
"deleteConfirmDesc": "确定要删除 {name} 吗?这将移除所有配置的模型且无法撤销。",
"typeToConfirm": "输入 \"{name}\" 以确认",
"typeProviderName": "输入提供商名称...",
"modelsConfiguredCount": "已配置 {count} 个模型",
"validationFailedCount": "{count} 个模型验证失败",
"cancel": "取消",
"delete": "删除",
"clickToChange": "(点击更改)",
"usingServerDefault": "使用服务器默认模型",
"selectModel": "选择模型",
"searchModels": "搜索模型...",
"noVerifiedModels": "没有已验证的模型。请先测试您的模型。",
"noModelsFound": "未找到模型。",
"default": "默认",
"serverDefault": "服务器默认",
"configureModels": "配置模型...",
"onlyVerifiedShown": "仅显示已验证的模型"
}
}

View File

@@ -21,9 +21,11 @@ export function getLangfuseClient(): LangfuseClient | null {
return langfuseClient
}
// Check if Langfuse is configured
// Check if Langfuse is configured (both keys required)
export function isLangfuseEnabled(): boolean {
return !!process.env.LANGFUSE_PUBLIC_KEY
return !!(
process.env.LANGFUSE_PUBLIC_KEY && process.env.LANGFUSE_SECRET_KEY
)
}
// Update trace with input data at the start of request
@@ -43,34 +45,16 @@ export function setTraceInput(params: {
}
// Update trace with output and end the span
export function setTraceOutput(
output: string,
usage?: { promptTokens?: number; completionTokens?: number },
) {
// Note: AI SDK 6 telemetry automatically reports token usage on its spans,
// so we only need to set the output text and close our wrapper span
export function setTraceOutput(output: string) {
if (!isLangfuseEnabled()) return
updateActiveTrace({ output })
// End the observe() wrapper span (AI SDK creates its own child spans with usage)
const activeSpan = api.trace.getActiveSpan()
if (activeSpan) {
// Manually set usage attributes since AI SDK Bedrock streaming doesn't provide them
if (usage?.promptTokens) {
activeSpan.setAttribute("ai.usage.promptTokens", usage.promptTokens)
activeSpan.setAttribute(
"gen_ai.usage.input_tokens",
usage.promptTokens,
)
}
if (usage?.completionTokens) {
activeSpan.setAttribute(
"ai.usage.completionTokens",
usage.completionTokens,
)
activeSpan.setAttribute(
"gen_ai.usage.output_tokens",
usage.completionTokens,
)
}
activeSpan.end()
}
}

View File

@@ -24,4 +24,8 @@ export const STORAGE_KEYS = {
aiBaseUrl: "next-ai-draw-io-ai-base-url",
aiApiKey: "next-ai-draw-io-ai-api-key",
aiModel: "next-ai-draw-io-ai-model",
// Multi-model configuration
modelConfigs: "next-ai-draw-io-model-configs",
selectedModelId: "next-ai-draw-io-selected-model-id",
} as const

View File

@@ -99,9 +99,9 @@ When using edit_diagram tool:
- For update/add: provide cell_id and complete new_xml (full mxCell element including mxGeometry)
- For delete: only cell_id is needed
- Find the cell_id from "Current diagram XML" in system context
- Example update: {"operations": [{"type": "update", "cell_id": "3", "new_xml": "<mxCell id=\\"3\\" value=\\"New Label\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
- Example delete: {"operations": [{"type": "delete", "cell_id": "5"}]}
- Example add: {"operations": [{"type": "add", "cell_id": "new1", "new_xml": "<mxCell id=\\"new1\\" value=\\"New Box\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"400\\" y=\\"200\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
- Example update: {"operations": [{"operation": "update", "cell_id": "3", "new_xml": "<mxCell id=\\"3\\" value=\\"New Label\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
- Example delete: {"operations": [{"operation": "delete", "cell_id": "5"}]}
- Example add: {"operations": [{"operation": "add", "cell_id": "new1", "new_xml": "<mxCell id=\\"new1\\" value=\\"New Box\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"400\\" y=\\"200\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
⚠️ JSON ESCAPING: Every " inside new_xml MUST be escaped as \\". Example: id=\\"5\\" value=\\"Label\\"
@@ -282,9 +282,9 @@ edit_diagram uses ID-based operations to modify cells directly by their id attri
\`\`\`json
{
"operations": [
{"type": "update", "cell_id": "3", "new_xml": "<mxCell ...complete element...>"},
{"type": "add", "cell_id": "new1", "new_xml": "<mxCell ...new element...>"},
{"type": "delete", "cell_id": "5"}
{"operation": "update", "cell_id": "3", "new_xml": "<mxCell ...complete element...>"},
{"operation": "add", "cell_id": "new1", "new_xml": "<mxCell ...new element...>"},
{"operation": "delete", "cell_id": "5"}
]
}
\`\`\`
@@ -293,17 +293,17 @@ edit_diagram uses ID-based operations to modify cells directly by their id attri
Change label:
\`\`\`json
{"operations": [{"type": "update", "cell_id": "3", "new_xml": "<mxCell id=\\"3\\" value=\\"New Label\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
{"operations": [{"operation": "update", "cell_id": "3", "new_xml": "<mxCell id=\\"3\\" value=\\"New Label\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
\`\`\`
Add new shape:
\`\`\`json
{"operations": [{"type": "add", "cell_id": "new1", "new_xml": "<mxCell id=\\"new1\\" value=\\"New Box\\" style=\\"rounded=1;fillColor=#dae8fc;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"400\\" y=\\"200\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
{"operations": [{"operation": "add", "cell_id": "new1", "new_xml": "<mxCell id=\\"new1\\" value=\\"New Box\\" style=\\"rounded=1;fillColor=#dae8fc;\\" vertex=\\"1\\" parent=\\"1\\">\\n <mxGeometry x=\\"400\\" y=\\"200\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/>\\n</mxCell>"}]}
\`\`\`
Delete cell:
\`\`\`json
{"operations": [{"type": "delete", "cell_id": "5"}]}
{"operations": [{"operation": "delete", "cell_id": "5"}]}
\`\`\`
**Error Recovery:**

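The renamed examples above imply an operation shape roughly like the following; the authoritative DiagramOperation definition lives elsewhere in the codebase (lib/utils.ts) and may differ in detail.

```ts
// Plausible shape inferred from the JSON examples above, not the verbatim definition.
type DiagramOperation =
    | { operation: "update"; cell_id: string; new_xml: string }
    | { operation: "add"; cell_id: string; new_xml: string }
    | { operation: "delete"; cell_id: string }
```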
277
lib/types/model-config.ts Normal file
View File

@@ -0,0 +1,277 @@
// Types for multi-provider model configuration
export type ProviderName =
| "openai"
| "anthropic"
| "google"
| "azure"
| "bedrock"
| "openrouter"
| "deepseek"
| "siliconflow"
| "gateway"
// Individual model configuration
export interface ModelConfig {
id: string // UUID for this model
modelId: string // e.g., "gpt-4o", "claude-sonnet-4-5"
validated?: boolean // Has this model been validated
validationError?: string // Error message if validation failed
}
// Provider configuration
export interface ProviderConfig {
id: string // UUID for this provider config
provider: ProviderName
name?: string // Custom display name (e.g., "OpenAI Production")
apiKey: string
baseUrl?: string
// AWS Bedrock specific fields
awsAccessKeyId?: string
awsSecretAccessKey?: string
awsRegion?: string
awsSessionToken?: string // Optional, for temporary credentials
models: ModelConfig[]
validated?: boolean // Has API key been validated
}
// The complete multi-model configuration
export interface MultiModelConfig {
version: 1
providers: ProviderConfig[]
selectedModelId?: string // Currently selected model's UUID
}
// Flattened model for dropdown display
export interface FlattenedModel {
id: string // Model config UUID
modelId: string // Actual model ID
provider: ProviderName
providerLabel: string // Provider display name
apiKey: string
baseUrl?: string
// AWS Bedrock specific fields
awsAccessKeyId?: string
awsSecretAccessKey?: string
awsRegion?: string
awsSessionToken?: string
validated?: boolean // Has this model been validated
}
// Provider metadata
export const PROVIDER_INFO: Record<
ProviderName,
{ label: string; defaultBaseUrl?: string }
> = {
openai: { label: "OpenAI" },
anthropic: {
label: "Anthropic",
defaultBaseUrl: "https://api.anthropic.com/v1",
},
google: { label: "Google" },
azure: { label: "Azure OpenAI" },
bedrock: { label: "Amazon Bedrock" },
openrouter: { label: "OpenRouter" },
deepseek: { label: "DeepSeek" },
siliconflow: {
label: "SiliconFlow",
defaultBaseUrl: "https://api.siliconflow.com/v1",
},
gateway: { label: "AI Gateway" },
}
// Suggested models per provider for quick add
export const SUGGESTED_MODELS: Record<ProviderName, string[]> = {
openai: [
// GPT-4o series (latest)
"gpt-4o",
"gpt-4o-mini",
"gpt-4o-2024-11-20",
// GPT-4 Turbo
"gpt-4-turbo",
"gpt-4-turbo-preview",
// o1/o3 reasoning models
"o1",
"o1-mini",
"o1-preview",
"o3-mini",
// GPT-4
"gpt-4",
// GPT-3.5
"gpt-3.5-turbo",
],
anthropic: [
// Claude 4.5 series (latest)
"claude-opus-4-5-20250514",
"claude-sonnet-4-5-20250514",
// Claude 4 series
"claude-opus-4-20250514",
"claude-sonnet-4-20250514",
// Claude 3.7 series
"claude-3-7-sonnet-20250219",
// Claude 3.5 series
"claude-3-5-sonnet-20241022",
"claude-3-5-haiku-20241022",
// Claude 3 series
"claude-3-opus-20240229",
"claude-3-sonnet-20240229",
"claude-3-haiku-20240307",
],
google: [
// Gemini 2.5 series
"gemini-2.5-pro",
"gemini-2.5-flash",
"gemini-2.5-flash-preview-05-20",
// Gemini 2.0 series
"gemini-2.0-flash",
"gemini-2.0-flash-exp",
"gemini-2.0-flash-lite",
// Gemini 1.5 series
"gemini-1.5-pro",
"gemini-1.5-flash",
// Legacy
"gemini-pro",
],
azure: ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4", "gpt-35-turbo"],
bedrock: [
// Anthropic Claude
"anthropic.claude-opus-4-5-20250514-v1:0",
"anthropic.claude-sonnet-4-5-20250514-v1:0",
"anthropic.claude-opus-4-20250514-v1:0",
"anthropic.claude-sonnet-4-20250514-v1:0",
"anthropic.claude-3-7-sonnet-20250219-v1:0",
"anthropic.claude-3-5-sonnet-20241022-v2:0",
"anthropic.claude-3-5-haiku-20241022-v1:0",
"anthropic.claude-3-opus-20240229-v1:0",
"anthropic.claude-3-sonnet-20240229-v1:0",
"anthropic.claude-3-haiku-20240307-v1:0",
// Amazon Nova
"amazon.nova-pro-v1:0",
"amazon.nova-lite-v1:0",
"amazon.nova-micro-v1:0",
// Meta Llama
"meta.llama3-3-70b-instruct-v1:0",
"meta.llama3-1-405b-instruct-v1:0",
"meta.llama3-1-70b-instruct-v1:0",
// Mistral
"mistral.mistral-large-2411-v1:0",
"mistral.mistral-small-2503-v1:0",
],
openrouter: [
// Anthropic
"anthropic/claude-sonnet-4",
"anthropic/claude-opus-4",
"anthropic/claude-3.5-sonnet",
"anthropic/claude-3.5-haiku",
// OpenAI
"openai/gpt-4o",
"openai/gpt-4o-mini",
"openai/o1",
"openai/o3-mini",
// Google
"google/gemini-2.5-pro",
"google/gemini-2.5-flash",
"google/gemini-2.0-flash-exp:free",
// Meta Llama
"meta-llama/llama-3.3-70b-instruct",
"meta-llama/llama-3.1-405b-instruct",
"meta-llama/llama-3.1-70b-instruct",
// DeepSeek
"deepseek/deepseek-chat",
"deepseek/deepseek-r1",
// Qwen
"qwen/qwen-2.5-72b-instruct",
],
deepseek: ["deepseek-chat", "deepseek-reasoner", "deepseek-coder"],
siliconflow: [
// DeepSeek
"deepseek-ai/DeepSeek-V3",
"deepseek-ai/DeepSeek-R1",
"deepseek-ai/DeepSeek-V2.5",
// Qwen
"Qwen/Qwen2.5-72B-Instruct",
"Qwen/Qwen2.5-32B-Instruct",
"Qwen/Qwen2.5-Coder-32B-Instruct",
"Qwen/Qwen2.5-7B-Instruct",
"Qwen/Qwen2-VL-72B-Instruct",
],
gateway: [
"openai/gpt-4o",
"openai/gpt-4o-mini",
"anthropic/claude-sonnet-4-5",
"anthropic/claude-3-5-sonnet",
"google/gemini-2.0-flash",
],
}
// Helper to generate UUID
export function generateId(): string {
return `${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
}
// Create empty config
export function createEmptyConfig(): MultiModelConfig {
return {
version: 1,
providers: [],
selectedModelId: undefined,
}
}
// Create new provider config
export function createProviderConfig(provider: ProviderName): ProviderConfig {
return {
id: generateId(),
provider,
apiKey: "",
baseUrl: PROVIDER_INFO[provider].defaultBaseUrl,
models: [],
validated: false,
}
}
// Create new model config
export function createModelConfig(modelId: string): ModelConfig {
return {
id: generateId(),
modelId,
}
}
// Get all models as flattened list for dropdown
export function flattenModels(config: MultiModelConfig): FlattenedModel[] {
const models: FlattenedModel[] = []
for (const provider of config.providers) {
// Use custom name if provided, otherwise use default provider label
const providerLabel =
provider.name || PROVIDER_INFO[provider.provider].label
for (const model of provider.models) {
models.push({
id: model.id,
modelId: model.modelId,
provider: provider.provider,
providerLabel,
apiKey: provider.apiKey,
baseUrl: provider.baseUrl,
// AWS Bedrock fields
awsAccessKeyId: provider.awsAccessKeyId,
awsSecretAccessKey: provider.awsSecretAccessKey,
awsRegion: provider.awsRegion,
awsSessionToken: provider.awsSessionToken,
validated: model.validated,
})
}
}
return models
}
// Find model by ID
export function findModelById(
config: MultiModelConfig,
modelId: string,
): FlattenedModel | undefined {
return flattenModels(config).find((m) => m.id === modelId)
}
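A short usage sketch of the helpers above; the API key is a placeholder.

```ts
import {
    createModelConfig,
    createProviderConfig,
    findModelById,
    flattenModels,
    type MultiModelConfig,
} from "@/lib/types/model-config"

const provider = createProviderConfig("openrouter")
provider.apiKey = "placeholder-key"
provider.models.push(createModelConfig("anthropic/claude-3.5-sonnet"))

const config: MultiModelConfig = {
    version: 1,
    providers: [provider],
    selectedModelId: provider.models[0].id,
}

flattenModels(config)[0].providerLabel                  // "OpenRouter"
findModelById(config, config.selectedModelId!)?.modelId // "anthropic/claude-3.5-sonnet"
```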

View File

@@ -1,9 +1,10 @@
"use client"
import { useCallback, useMemo } from "react"
import { useCallback } from "react"
import { toast } from "sonner"
import { QuotaLimitToast } from "@/components/quota-limit-toast"
import { STORAGE_KEYS } from "@/lib/storage"
import { useDictionary } from "@/hooks/use-dictionary"
import { formatMessage } from "@/lib/i18n/utils"
export interface QuotaConfig {
dailyRequestLimit: number
@@ -11,179 +12,45 @@ export interface QuotaConfig {
tpmLimit: number
}
export interface QuotaCheckResult {
allowed: boolean
remaining: number
used: number
}
/**
* Hook for managing request/token quotas and rate limiting.
* Handles three types of limits:
* - Daily request limit
* - Daily token limit
* - Tokens per minute (TPM) rate limit
*
* Users with their own API key bypass all limits.
* Hook for displaying quota limit toasts.
* Server-side handles actual quota enforcement via DynamoDB.
* This hook only provides UI feedback when limits are exceeded.
*/
export function useQuotaManager(config: QuotaConfig): {
hasOwnApiKey: () => boolean
checkDailyLimit: () => QuotaCheckResult
checkTokenLimit: () => QuotaCheckResult
checkTPMLimit: () => QuotaCheckResult
incrementRequestCount: () => void
incrementTokenCount: (tokens: number) => void
incrementTPMCount: (tokens: number) => void
showQuotaLimitToast: () => void
showTokenLimitToast: (used: number) => void
showTPMLimitToast: () => void
showQuotaLimitToast: (used?: number, limit?: number) => void
showTokenLimitToast: (used?: number, limit?: number) => void
showTPMLimitToast: (limit?: number) => void
} {
const { dailyRequestLimit, dailyTokenLimit, tpmLimit } = config
// Check if user has their own API key configured (bypass limits)
const hasOwnApiKey = useCallback((): boolean => {
const provider = localStorage.getItem(STORAGE_KEYS.aiProvider)
const apiKey = localStorage.getItem(STORAGE_KEYS.aiApiKey)
return !!(provider && apiKey)
}, [])
// Generic helper: Parse count from localStorage with NaN guard
const parseStorageCount = (key: string): number => {
const count = parseInt(localStorage.getItem(key) || "0", 10)
return Number.isNaN(count) ? 0 : count
}
// Generic helper: Create quota checker factory
const createQuotaChecker = useCallback(
(
getTimeKey: () => string,
timeStorageKey: string,
countStorageKey: string,
limit: number,
) => {
return (): QuotaCheckResult => {
if (hasOwnApiKey())
return { allowed: true, remaining: -1, used: 0 }
if (limit <= 0) return { allowed: true, remaining: -1, used: 0 }
const currentTime = getTimeKey()
const storedTime = localStorage.getItem(timeStorageKey)
let count = parseStorageCount(countStorageKey)
if (storedTime !== currentTime) {
count = 0
localStorage.setItem(timeStorageKey, currentTime)
localStorage.setItem(countStorageKey, "0")
}
return {
allowed: count < limit,
remaining: limit - count,
used: count,
}
}
},
[hasOwnApiKey],
)
// Generic helper: Create quota incrementer factory
const createQuotaIncrementer = useCallback(
(
getTimeKey: () => string,
timeStorageKey: string,
countStorageKey: string,
validateInput: boolean = false,
) => {
return (tokens: number = 1): void => {
if (validateInput && (!Number.isFinite(tokens) || tokens <= 0))
return
const currentTime = getTimeKey()
const storedTime = localStorage.getItem(timeStorageKey)
let count = parseStorageCount(countStorageKey)
if (storedTime !== currentTime) {
count = 0
localStorage.setItem(timeStorageKey, currentTime)
}
localStorage.setItem(countStorageKey, String(count + tokens))
}
},
[],
)
// Check daily request limit
const checkDailyLimit = useMemo(
() =>
createQuotaChecker(
() => new Date().toDateString(),
STORAGE_KEYS.requestDate,
STORAGE_KEYS.requestCount,
dailyRequestLimit,
),
[createQuotaChecker, dailyRequestLimit],
)
// Increment request count
const incrementRequestCount = useMemo(
() =>
createQuotaIncrementer(
() => new Date().toDateString(),
STORAGE_KEYS.requestDate,
STORAGE_KEYS.requestCount,
false,
),
[createQuotaIncrementer],
)
const dict = useDictionary()
// Check daily token limit
const checkTokenLimit = useMemo(
() =>
createQuotaChecker(
() => new Date().toDateString(),
STORAGE_KEYS.tokenDate,
STORAGE_KEYS.tokenCount,
dailyTokenLimit,
),
[createQuotaChecker, dailyTokenLimit],
)
// Increment token count
const incrementTokenCount = useMemo(
() =>
createQuotaIncrementer(
() => new Date().toDateString(),
STORAGE_KEYS.tokenDate,
STORAGE_KEYS.tokenCount,
true, // Validate input tokens
),
[createQuotaIncrementer],
)
// Show quota limit toast (request-based)
const showQuotaLimitToast = useCallback(
(used?: number, limit?: number) => {
toast.custom(
(t) => (
<QuotaLimitToast
used={used ?? dailyRequestLimit}
limit={limit ?? dailyRequestLimit}
onDismiss={() => toast.dismiss(t)}
/>
),
{ duration: 15000 },
)
},
[dailyRequestLimit],
)
// Show token limit toast
const showTokenLimitToast = useCallback(
(used?: number, limit?: number) => {
toast.custom(
(t) => (
<QuotaLimitToast
type="token"
used={used ?? dailyTokenLimit}
limit={limit ?? dailyTokenLimit}
onDismiss={() => toast.dismiss(t)}
/>
),
{ duration: 15000 },
)
},
[dailyTokenLimit],
)
// Check TPM (tokens per minute) limit
const checkTPMLimit = useMemo(
() =>
createQuotaChecker(
() => Math.floor(Date.now() / 60000).toString(),
STORAGE_KEYS.tpmMinute,
STORAGE_KEYS.tpmCount,
tpmLimit,
),
[createQuotaChecker, tpmLimit],
)
// Increment TPM count
const incrementTPMCount = useMemo(
() =>
createQuotaIncrementer(
() => Math.floor(Date.now() / 60000).toString(),
STORAGE_KEYS.tpmMinute,
STORAGE_KEYS.tpmCount,
true, // Validate input tokens
),
[createQuotaIncrementer],
)
// Show TPM limit toast
const showTPMLimitToast = useCallback(
(limit?: number) => {
const effectiveLimit = limit ?? tpmLimit
const limitDisplay =
effectiveLimit >= 1000
? `${effectiveLimit / 1000}k`
: String(effectiveLimit)
const message = formatMessage(dict.quota.tpmMessageDetailed, {
limit: limitDisplay,
seconds: 60,
})
toast.error(message, { duration: 8000 })
},
[tpmLimit, dict],
)
return {
// Check functions
hasOwnApiKey,
checkDailyLimit,
checkTokenLimit,
checkTPMLimit,
// Increment functions
incrementRequestCount,
incrementTokenCount,
incrementTPMCount,
// Toast functions
showQuotaLimitToast,
showTokenLimitToast,
showTPMLimitToast,
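
All of the checkers and incrementers above share one pattern: a localStorage counter keyed by a time bucket (calendar day for the daily limits, current minute for TPM) that resets whenever the bucket changes. A minimal standalone sketch of that pattern, with hypothetical keys rather than the hook's STORAGE_KEYS:

function checkBucketedQuota(
    bucketKey: string,     // stores the current bucket id, e.g. "demo:quota-day"
    countKey: string,      // stores the count for that bucket, e.g. "demo:quota-count"
    limit: number,
    currentBucket: string, // new Date().toDateString() for daily, Math.floor(Date.now() / 60000).toString() for TPM
): { allowed: boolean; remaining: number; used: number } {
    let used = parseInt(localStorage.getItem(countKey) || "0", 10)
    if (Number.isNaN(used)) used = 0
    if (localStorage.getItem(bucketKey) !== currentBucket) {
        // New day (or minute): reset before checking
        used = 0
        localStorage.setItem(bucketKey, currentBucket)
        localStorage.setItem(countKey, "0")
    }
    return { allowed: used < limit, remaining: limit - used, used }
}

// e.g. a 50-requests-per-day check:
const daily = checkBucketedQuota("demo:quota-day", "demo:quota-count", 50, new Date().toDateString())
if (!daily.allowed) {
    // show the quota toast here
}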

lib/user-id.ts (new file)

@@ -0,0 +1,12 @@
/**
* Generate a userId from request for tracking purposes.
* Uses base64url encoding of IP for URL-safe identifier.
* Note: base64 is reversible - this is NOT privacy protection.
*/
export function getUserIdFromRequest(req: Request): string {
const forwardedFor = req.headers.get("x-forwarded-for")
const rawIp = forwardedFor?.split(",")[0]?.trim() || "anonymous"
return rawIp === "anonymous"
? rawIp
: `user-${Buffer.from(rawIp).toString("base64url")}`
}
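
A usage sketch (Node 18+ or an edge runtime where the Request constructor is global; the relative import path is an assumption):

import { getUserIdFromRequest } from "./user-id"

const req = new Request("http://localhost/api/chat", {
    headers: { "x-forwarded-for": "203.0.113.7, 10.0.0.1" },
})

// First hop of x-forwarded-for, base64url-encoded:
// e.g. "user-MjAzLjAuMTEzLjc" for 203.0.113.7, or "anonymous" when the header is missing.
console.log(getUserIdFromRequest(req))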


@@ -36,29 +36,73 @@ const VALID_ENTITIES = new Set(["lt", "gt", "amp", "quot", "apos"])
/**
* Check if mxCell XML output is complete (not truncated).
* Complete XML ends with a self-closing tag (/>) or closing mxCell tag.
* Also handles function-calling wrapper tags that may be incorrectly included.
* Uses a robust approach that handles any LLM provider's wrapper tags
* by finding the last valid mxCell ending and checking if suffix is just closing tags.
* @param xml - The XML string to check (can be undefined/null)
* @returns true if XML appears complete, false if truncated or empty
*/
export function isMxCellXmlComplete(xml: string | undefined | null): boolean {
const trimmed = xml?.trim() || ""
if (!trimmed) return false
// Find position of last complete mxCell ending (either /> or </mxCell>)
const lastSelfClose = trimmed.lastIndexOf("/>")
const lastMxCellClose = trimmed.lastIndexOf("</mxCell>")
const lastValidEnd = Math.max(lastSelfClose, lastMxCellClose)
// No valid ending found at all
if (lastValidEnd === -1) return false
// Check what comes after the last valid ending
// For />: add 2 chars, for </mxCell>: add 9 chars
const endOffset = lastMxCellClose > lastSelfClose ? 9 : 2
const suffix = trimmed.slice(lastValidEnd + endOffset)
// If suffix is empty or only contains closing tags (any provider's wrapper) or whitespace, it's complete
// This regex matches any sequence of closing XML tags like </foo>, </bar>, </DSMLxyz>
return /^(\s*<\/[^>]+>)*\s*$/.test(suffix)
}
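
Tracing the new implementation above: a closed cell is accepted even with a trailing wrapper tag, while truncated output is still rejected.

isMxCellXmlComplete('<mxCell id="2" vertex="1" parent="1"/>')            // true
isMxCellXmlComplete('<mxCell id="2" vertex="1" parent="1"/></invoke>')   // true  (suffix is only closing tags)
isMxCellXmlComplete('<mxCell id="2" value="Box')                         // false (no /> or </mxCell> found)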
/**
* Extract only complete mxCell elements from partial/streaming XML.
* This allows progressive rendering during streaming by ignoring incomplete trailing elements.
* @param xml - The partial XML string (may contain incomplete trailing mxCell)
* @returns XML string containing only complete mxCell elements
*/
export function extractCompleteMxCells(xml: string | undefined | null): string {
if (!xml) return ""
const completeCells: Array<{ index: number; text: string }> = []
// Match self-closing mxCell tags: <mxCell ... />
// Also match mxCell with nested mxGeometry: <mxCell ...>...<mxGeometry .../></mxCell>
const selfClosingPattern = /<mxCell\s+[^>]*\/>/g
const nestedPattern = /<mxCell\s+[^>]*>[\s\S]*?<\/mxCell>/g
// Find all self-closing mxCell elements
let match: RegExpExecArray | null
while ((match = selfClosingPattern.exec(xml)) !== null) {
completeCells.push({ index: match.index, text: match[0] })
}
// Find all mxCell elements with nested content (like mxGeometry)
while ((match = nestedPattern.exec(xml)) !== null) {
completeCells.push({ index: match.index, text: match[0] })
}
// Sort by position to maintain order
completeCells.sort((a, b) => a.index - b.index)
// Remove duplicates (a self-closing match might overlap with nested match)
const seen = new Set<number>()
const uniqueCells = completeCells.filter((cell) => {
if (seen.has(cell.index)) return false
seen.add(cell.index)
return true
})
return uniqueCells.map((c) => c.text).join("\n")
}
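
For example, mid-stream only the fully closed cell is returned; the truncated trailing cell is ignored until more chunks arrive:

const partial =
    '<mxCell id="a" value="Box A" vertex="1" parent="1">' +
    '<mxGeometry x="0" y="0" width="80" height="40" as="geometry"/></mxCell>' +
    '<mxCell id="b" style="rounded=1;" vertex="1" par'

extractCompleteMxCells(partial)
// -> only the complete "a" cell; "b" appears once its closing tag has streamed in.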
// ============================================================================
@@ -221,6 +265,21 @@ export function convertToLegalXml(xmlString: string): string {
"&amp;",
)
// Fix unescaped < and > in attribute values for XML parsing
// HTML content in value attributes (e.g., <b>Title</b>) needs to be escaped
// This is critical because DOMParser will fail on unescaped < > in attributes
if (/=\s*"[^"]*<[^"]*"/.test(cellContent)) {
cellContent = cellContent.replace(
/=\s*"([^"]*)"/g,
(_match, value) => {
const escaped = value
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
return `="${escaped}"`
},
)
}
// Indent each line of the matched block for readability.
const formatted = cellContent
.split("\n")
@@ -265,6 +324,20 @@ export function wrapWithMxFile(xml: string): string {
content = xml.replace(/<\/?root>/g, "").trim()
}
// Strip trailing LLM wrapper tags (from any provider: Anthropic, DeepSeek, etc.)
// Find the last valid mxCell ending and remove everything after it
const lastSelfClose = content.lastIndexOf("/>")
const lastMxCellClose = content.lastIndexOf("</mxCell>")
const lastValidEnd = Math.max(lastSelfClose, lastMxCellClose)
if (lastValidEnd !== -1) {
const endOffset = lastMxCellClose > lastSelfClose ? 9 : 2
const suffix = content.slice(lastValidEnd + endOffset)
// If suffix is only closing tags (wrapper tags), strip it
if (/^(\s*<\/[^>]+>)*\s*$/.test(suffix)) {
content = content.slice(0, lastValidEnd + endOffset)
}
}
// Remove any existing root cells from content (LLM shouldn't include them, but handle it gracefully)
// Use flexible patterns that match both self-closing (/>) and non-self-closing (></mxCell>) formats
content = content
@@ -382,7 +455,7 @@ export function replaceNodes(currentXML: string, nodes: string): string {
// ============================================================================
export interface DiagramOperation {
type: "update" | "add" | "delete"
operation: "update" | "add" | "delete"
cell_id: string
new_xml?: string
}
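
A typed sketch of the renamed field in use (currentXML is assumed to hold the current diagram XML; the call shape matches the test script later in this diff):

const operations: DiagramOperation[] = [
    {
        operation: "update",
        cell_id: "2",
        new_xml:
            '<mxCell id="2" value="Renamed" style="rounded=1;" vertex="1" parent="1">' +
            '<mxGeometry x="100" y="100" width="120" height="60" as="geometry"/></mxCell>',
    },
    { operation: "delete", cell_id: "3" },
]

const { result, errors } = applyDiagramOperations(currentXML, operations)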
@@ -455,7 +528,7 @@ export function applyDiagramOperations(
// Process each operation
for (const op of operations) {
if (op.type === "update") {
if (op.operation === "update") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({
@@ -507,7 +580,7 @@ export function applyDiagramOperations(
// Update the map with the new element
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "add") {
} else if (op.operation === "add") {
// Check if ID already exists
if (cellMap.has(op.cell_id)) {
errors.push({
@@ -559,7 +632,7 @@ export function applyDiagramOperations(
// Add to map
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "delete") {
} else if (op.operation === "delete") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({
@@ -869,6 +942,21 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
fixes.push("Removed CDATA wrapper")
}
// 1b. Strip trailing LLM wrapper tags (DeepSeek, Anthropic, etc.)
// These are closing tags after the last valid mxCell that break XML parsing
const lastSelfClose = fixed.lastIndexOf("/>")
const lastMxCellClose = fixed.lastIndexOf("</mxCell>")
const lastValidEnd = Math.max(lastSelfClose, lastMxCellClose)
if (lastValidEnd !== -1) {
const endOffset = lastMxCellClose > lastSelfClose ? 9 : 2
const suffix = fixed.slice(lastValidEnd + endOffset)
// If suffix contains only closing tags (wrapper tags) or whitespace, strip it
if (/^(\s*<\/[^>]+>)+\s*$/.test(suffix)) {
fixed = fixed.slice(0, lastValidEnd + endOffset)
fixes.push("Stripped trailing LLM wrapper tags")
}
}
// 2. Remove text before XML declaration or root element (only if it's garbage text, not valid XML)
const xmlStart = fixed.search(/<(\?xml|mxGraphModel|mxfile)/i)
if (xmlStart > 0 && !/^<[a-zA-Z]/.test(fixed.trim())) {
@@ -974,8 +1062,8 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
fixes.push("Removed quotes around color values in style")
}
// 4. Fix unescaped < in attribute values
// This is tricky - we need to find < inside quoted attribute values
// 4. Fix unescaped < and > in attribute values
// < is required to be escaped, > is not strictly required but we escape for consistency
const attrPattern = /(=\s*")([^"]*?)(<)([^"]*?)(")/g
let attrMatch
let hasUnescapedLt = false
@@ -986,12 +1074,12 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
}
}
if (hasUnescapedLt) {
// Replace < with &lt; inside attribute values
// Replace < and > with &lt; and &gt; inside attribute values
fixed = fixed.replace(/=\s*"([^"]*)"/g, (_match, value) => {
const escaped = value.replace(/</g, "&lt;")
const escaped = value.replace(/</g, "&lt;").replace(/>/g, "&gt;")
return `="${escaped}"`
})
fixes.push("Escaped < characters in attribute values")
fixes.push("Escaped <> characters in attribute values")
}
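// A standalone check of the step-4 replace above on a hypothetical cell:
//   const sample = '<mxCell id="2" value="<b>Title</b>" vertex="1"/>'
//   sample.replace(/=\s*"([^"]*)"/g, (_m, v) =>
//       `="${v.replace(/</g, "&lt;").replace(/>/g, "&gt;")}"`)
//   // -> '<mxCell id="2" value="&lt;b&gt;Title&lt;/b&gt;" vertex="1"/>'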
// 5. Fix invalid character references (remove malformed ones)
@@ -1079,7 +1167,8 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
}
// 8c. Remove non-draw.io tags (after typo fixes so lowercase variants are fixed first)
// Valid draw.io tags: mxfile, diagram, mxGraphModel, root, mxCell, mxGeometry, mxPoint, Array, Object
// IMPORTANT: Only remove tags at the element level, NOT inside quoted attribute values
// Tags like <b>, <br> inside value="<b>text</b>" should be preserved (they're HTML content)
const validDrawioTags = new Set([
"mxfile",
"diagram",
@@ -1092,25 +1181,59 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
"Object",
"mxRectangle",
])
// Helper: Check if a position is inside a quoted attribute value
// by counting unescaped quotes before that position
const isInsideQuotes = (str: string, pos: number): boolean => {
let inQuote = false
let quoteChar = ""
for (let i = 0; i < pos && i < str.length; i++) {
const c = str[i]
if (inQuote) {
if (c === quoteChar) inQuote = false
} else if (c === '"' || c === "'") {
// Check if this quote is part of an attribute (preceded by =)
// Look back for = sign
let j = i - 1
while (j >= 0 && /\s/.test(str[j])) j--
if (j >= 0 && str[j] === "=") {
inQuote = true
quoteChar = c
}
}
}
return inQuote
}
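// For example, with a hypothetical cell string:
//   const s = '<mxCell value="<b>Hi</b>" vertex="1"/>'
//   isInsideQuotes(s, s.indexOf("<b>"))  -> true: the HTML tag sits inside value="...", so it is kept
//   isInsideQuotes(s, 0)                 -> false: the mxCell tag itself is at element level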
const foreignTagPattern = /<\/?([a-zA-Z][a-zA-Z0-9_]*)[^>]*>/g
let foreignMatch
const foreignTags = new Set<string>()
const foreignTagPositions: Array<{
tag: string
start: number
end: number
}> = []
while ((foreignMatch = foreignTagPattern.exec(fixed)) !== null) {
const tagName = foreignMatch[1]
if (!validDrawioTags.has(tagName)) {
foreignTags.add(tagName)
}
// Skip if this is a valid draw.io tag
if (validDrawioTags.has(tagName)) continue
// Skip if this tag is inside a quoted attribute value
if (isInsideQuotes(fixed, foreignMatch.index)) continue
foreignTags.add(tagName)
foreignTagPositions.push({
tag: tagName,
start: foreignMatch.index,
end: foreignMatch.index + foreignMatch[0].length,
})
}
if (foreignTags.size > 0) {
console.log(
"[autoFixXml] Step 8c: Found foreign tags:",
Array.from(foreignTags),
)
for (const tag of foreignTags) {
// Remove opening tags (with or without attributes)
fixed = fixed.replace(new RegExp(`<${tag}[^>]*>`, "gi"), "")
// Remove closing tags
fixed = fixed.replace(new RegExp(`</${tag}>`, "gi"), "")
if (foreignTagPositions.length > 0) {
// Remove tags from end to start to preserve indices
foreignTagPositions.sort((a, b) => b.start - a.start)
for (const { start, end } of foreignTagPositions) {
fixed = fixed.slice(0, start) + fixed.slice(end)
}
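// Deleting the collected ranges from end to start keeps the earlier indices valid.
// The same pattern on an arbitrary string with [start, end) spans, as a sketch:
//   let demo = "abcXXdefYYg"
//   for (const { start, end } of [{ start: 8, end: 10 }, { start: 3, end: 5 }]) {
//       demo = demo.slice(0, start) + demo.slice(end)
//   }
//   // demo === "abcdefg"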
fixes.push(
`Removed foreign tags: ${Array.from(foreignTags).join(", ")}`,
@@ -1161,6 +1284,7 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
// 10b. Remove extra closing tags (more closes than opens)
// Need to properly count self-closing tags (they don't need closing tags)
// IMPORTANT: Only count tags at element level, NOT inside quoted attribute values
const tagCounts = new Map<
string,
{ opens: number; closes: number; selfClosing: number }
@@ -1169,12 +1293,18 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
const fullTagPattern = /<(\/?[a-zA-Z][a-zA-Z0-9]*)[^>]*>/g
let tagCountMatch
while ((tagCountMatch = fullTagPattern.exec(fixed)) !== null) {
// Skip tags inside quoted attribute values (e.g., value="<b>Title</b>")
if (isInsideQuotes(fixed, tagCountMatch.index)) continue
const fullMatch = tagCountMatch[0] // e.g., "<mxCell .../>" or "</mxCell>"
const tagPart = tagCountMatch[1] // e.g., "mxCell" or "/mxCell"
const isClosing = tagPart.startsWith("/")
const isSelfClosing = fullMatch.endsWith("/>")
const tagName = isClosing ? tagPart.slice(1) : tagPart
// Only count valid draw.io tags - skip partial/invalid tags like "mx" from streaming
if (!validDrawioTags.has(tagName)) continue
let counts = tagCounts.get(tagName)
if (!counts) {
counts = { opens: 0, closes: 0, selfClosing: 0 }
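// Balance rule these counts feed: a tag has extra closing tags when closes exceed
// opens; self-closing occurrences need no separate close. For example, in
// '<root><mxCell id="2"/></root></root>' the second </root> is the extra one to remove.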


@@ -4,9 +4,16 @@ import packageJson from "./package.json"
const nextConfig: NextConfig = {
/* config options here */
output: "standalone",
// Support for subdirectory deployment (e.g., https://example.com/nextaidrawio)
// Set NEXT_PUBLIC_BASE_PATH environment variable to your subdirectory path (e.g., /nextaidrawio)
basePath: process.env.NEXT_PUBLIC_BASE_PATH || "",
env: {
APP_VERSION: packageJson.version,
},
// Include instrumentation.ts in standalone build for Langfuse telemetry
outputFileTracingIncludes: {
"*": ["./instrumentation.ts"],
},
}
export default nextConfig

package-lock.json (generated, 2287 changed lines): diff suppressed because it is too large.


@@ -1,6 +1,6 @@
{
"name": "next-ai-draw-io",
"version": "0.4.5",
"version": "0.4.6",
"license": "Apache-2.0",
"private": true,
"main": "dist-electron/main/index.js",
@@ -14,7 +14,7 @@
"prepare": "husky",
"electron:dev": "node scripts/electron-dev.mjs",
"electron:build": "npm run build && npm run electron:compile",
"electron:compile": "npx esbuild electron/main/index.ts electron/preload/index.ts electron/preload/settings.ts --bundle --platform=node --outdir=dist-electron --external:electron --sourcemap --packages=external && npx shx cp -r electron/settings dist-electron/",
"electron:compile": "npx esbuild electron/main/index.ts electron/preload/index.ts --bundle --platform=node --outdir=dist-electron --external:electron --sourcemap --packages=external",
"electron:start": "npx cross-env NODE_ENV=development npx electron .",
"electron:prepare": "node scripts/prepare-electron-build.mjs",
"dist": "npm run electron:build && npm run electron:prepare && npx electron-builder",
@@ -24,37 +24,41 @@
"dist:all": "npm run electron:build && npm run electron:prepare && npx electron-builder --mac --win --linux"
},
"dependencies": {
"@ai-sdk/amazon-bedrock": "^3.0.70",
"@ai-sdk/anthropic": "^2.0.44",
"@ai-sdk/azure": "^2.0.69",
"@ai-sdk/deepseek": "^1.0.30",
"@ai-sdk/gateway": "^2.0.21",
"@ai-sdk/google": "^2.0.0",
"@ai-sdk/openai": "^2.0.19",
"@ai-sdk/react": "^2.0.107",
"@ai-sdk/amazon-bedrock": "^4.0.1",
"@ai-sdk/anthropic": "^3.0.0",
"@ai-sdk/azure": "^3.0.0",
"@ai-sdk/deepseek": "^2.0.0",
"@ai-sdk/gateway": "^3.0.0",
"@ai-sdk/google": "^3.0.0",
"@ai-sdk/openai": "^3.0.0",
"@ai-sdk/react": "^3.0.1",
"@aws-sdk/client-dynamodb": "^3.957.0",
"@aws-sdk/credential-providers": "^3.943.0",
"@formatjs/intl-localematcher": "^0.7.2",
"@langfuse/client": "^4.4.9",
"@langfuse/otel": "^4.4.4",
"@langfuse/tracing": "^4.4.9",
"@next/third-parties": "^16.0.6",
"@openrouter/ai-sdk-provider": "^1.2.3",
"@openrouter/ai-sdk-provider": "^1.5.4",
"@opentelemetry/exporter-trace-otlp-http": "^0.208.0",
"@opentelemetry/sdk-trace-node": "^2.2.0",
"@radix-ui/react-alert-dialog": "^1.1.15",
"@radix-ui/react-collapsible": "^1.1.12",
"@radix-ui/react-dialog": "^1.1.6",
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-label": "^2.1.8",
"@radix-ui/react-popover": "^1.1.15",
"@radix-ui/react-scroll-area": "^1.2.3",
"@radix-ui/react-select": "^2.2.6",
"@radix-ui/react-slot": "^1.1.2",
"@radix-ui/react-slot": "^1.2.4",
"@radix-ui/react-switch": "^1.2.6",
"@radix-ui/react-tooltip": "^1.1.8",
"@radix-ui/react-use-controllable-state": "^1.2.2",
"@xmldom/xmldom": "^0.9.8",
"ai": "^5.0.89",
"ai": "^6.0.1",
"base-64": "^1.0.0",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"cmdk": "^1.1.1",
"js-tiktoken": "^1.0.21",
"jsdom": "^26.0.0",
"jsonrepair": "^3.13.1",
@@ -108,5 +112,10 @@
"tailwindcss": "^4",
"typescript": "^5",
"wait-on": "^9.0.3"
},
"overrides": {
"@openrouter/ai-sdk-provider": {
"ai": "^6.0.1"
}
}
}


@@ -1,6 +1,6 @@
{
"name": "@next-ai-drawio/mcp-server",
"version": "0.1.5",
"version": "0.1.6",
"description": "MCP server for Next AI Draw.io - AI-powered diagram generation with real-time browser preview",
"type": "module",
"main": "dist/index.js",


@@ -4,7 +4,7 @@
*/
export interface DiagramOperation {
type: "update" | "add" | "delete"
operation: "update" | "add" | "delete"
cell_id: string
new_xml?: string
}
@@ -77,7 +77,7 @@ export function applyDiagramOperations(
// Process each operation
for (const op of operations) {
if (op.type === "update") {
if (op.operation === "update") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({
@@ -129,7 +129,7 @@ export function applyDiagramOperations(
// Update the map with the new element
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "add") {
} else if (op.operation === "add") {
// Check if ID already exists
if (cellMap.has(op.cell_id)) {
errors.push({
@@ -181,7 +181,7 @@ export function applyDiagramOperations(
// Add to map
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "delete") {
} else if (op.operation === "delete") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({


@@ -265,14 +265,22 @@ server.registerTool(
"- add: Add a new cell. Provide cell_id (new unique id) and new_xml.\n" +
"- update: Replace an existing cell by its id. Provide cell_id and complete new_xml.\n" +
"- delete: Remove a cell by its id. Only cell_id is needed.\n\n" +
"For add/update, new_xml must be a complete mxCell element including mxGeometry.",
"For add/update, new_xml must be a complete mxCell element including mxGeometry.\n\n" +
"Example - Add a rectangle:\n" +
'{"operations": [{"operation": "add", "cell_id": "rect-1", "new_xml": "<mxCell id=\\"rect-1\\" value=\\"Hello\\" style=\\"rounded=0;\\" vertex=\\"1\\" parent=\\"1\\"><mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/></mxCell>"}]}\n\n' +
"Example - Update a cell:\n" +
'{"operations": [{"operation": "update", "cell_id": "3", "new_xml": "<mxCell id=\\"3\\" value=\\"New Label\\" style=\\"rounded=1;\\" vertex=\\"1\\" parent=\\"1\\"><mxGeometry x=\\"100\\" y=\\"100\\" width=\\"120\\" height=\\"60\\" as=\\"geometry\\"/></mxCell>"}]}\n\n' +
"Example - Delete a cell:\n" +
'{"operations": [{"operation": "delete", "cell_id": "rect-1"}]}',
inputSchema: {
operations: z
.array(
z.object({
type: z
operation: z
.enum(["update", "add", "delete"])
.describe("Operation type"),
.describe(
"Operation to perform: add, update, or delete",
),
cell_id: z.string().describe("The id of the mxCell"),
new_xml: z
.string()
@@ -356,13 +364,13 @@ server.registerTool(
)
if (fixed) {
log.info(
`Operation ${op.type} ${op.cell_id}: XML auto-fixed: ${fixes.join(", ")}`,
`Operation ${op.operation} ${op.cell_id}: XML auto-fixed: ${fixes.join(", ")}`,
)
return { ...op, new_xml: fixed }
}
if (!valid && error) {
log.warn(
`Operation ${op.type} ${op.cell_id}: XML validation failed: ${error}`,
`Operation ${op.operation} ${op.cell_id}: XML validation failed: ${error}`,
)
}
}
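
To sanity-check a payload against this tool outside the MCP server, a standalone zod sketch (the inline schema mirrors the registered inputSchema above rather than importing it):

import { z } from "zod"

const operationsSchema = z.object({
    operations: z.array(
        z.object({
            operation: z.enum(["update", "add", "delete"]),
            cell_id: z.string(),
            new_xml: z.string().optional(),
        }),
    ),
})

// The "delete a cell" example from the tool description parses cleanly:
const parsed = operationsSchema.safeParse({
    operations: [{ operation: "delete", cell_id: "rect-1" }],
})
console.log(parsed.success) // true; a payload still using the old "type" field would fail here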


@@ -459,7 +459,8 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
fixes.push("Removed quotes around color values in style")
}
// 10. Fix unescaped < in attribute values
// 10. Fix unescaped < and > in attribute values
// < is required to be escaped, > is not strictly required but we escape for consistency
const attrPattern = /(=\s*")([^"]*?)(<)([^"]*?)(")/g
let attrMatch
let hasUnescapedLt = false
@@ -471,10 +472,10 @@ export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
}
if (hasUnescapedLt) {
fixed = fixed.replace(/=\s*"([^"]*)"/g, (_match, value) => {
const escaped = value.replace(/</g, "&lt;")
const escaped = value.replace(/</g, "&lt;").replace(/>/g, "&gt;")
return `="${escaped}"`
})
fixes.push("Escaped < characters in attribute values")
fixes.push("Escaped <> characters in attribute values")
}
// 11. Fix invalid hex character references
@@ -903,24 +904,30 @@ export function validateAndFixXml(xml: string): {
/**
* Check if mxCell XML output is complete (not truncated).
* Uses a robust approach that handles any LLM provider's wrapper tags
* by finding the last valid mxCell ending and checking if suffix is just closing tags.
* @param xml - The XML string to check (can be undefined/null)
* @returns true if XML appears complete, false if truncated or empty
*/
export function isMxCellXmlComplete(xml: string | undefined | null): boolean {
let trimmed = xml?.trim() || ""
const trimmed = xml?.trim() || ""
if (!trimmed) return false
// Strip wrapper tags if present
let prev = ""
while (prev !== trimmed) {
prev = trimmed
trimmed = trimmed
.replace(/<\/mxParameter>\s*$/i, "")
.replace(/<\/invoke>\s*$/i, "")
.replace(/<\/antml:parameter>\s*$/i, "")
.replace(/<\/antml:invoke>\s*$/i, "")
.trim()
}
// Find position of last complete mxCell ending (either /> or </mxCell>)
const lastSelfClose = trimmed.lastIndexOf("/>")
const lastMxCellClose = trimmed.lastIndexOf("</mxCell>")
return trimmed.endsWith("/>") || trimmed.endsWith("</mxCell>")
const lastValidEnd = Math.max(lastSelfClose, lastMxCellClose)
// No valid ending found at all
if (lastValidEnd === -1) return false
// Check what comes after the last valid ending
// For />: add 2 chars, for </mxCell>: add 9 chars
const endOffset = lastMxCellClose > lastSelfClose ? 9 : 2
const suffix = trimmed.slice(lastValidEnd + endOffset)
// If suffix is empty or only contains closing tags (any provider's wrapper) or whitespace, it's complete
// This regex matches any sequence of closing XML tags like </foo>, </bar>, </DSMLxyz>
return /^(\s*<\/[^>]+>)*\s*$/.test(suffix)
}

Binary image files changed (not shown). Sizes before → after: 14 KiB → 14 KiB, 54 KiB → 111 KiB, 15 KiB → 20 KiB, 54 KiB → 111 KiB.


@@ -2,17 +2,13 @@
/**
* Development script for running Electron with Next.js
* 1. Reads preset configuration (if exists)
* 2. Starts Next.js dev server with preset env vars
* 3. Waits for it to be ready
* 4. Compiles Electron TypeScript
* 5. Launches Electron
* 6. Watches for preset changes and restarts Next.js
* 1. Starts Next.js dev server
* 2. Waits for it to be ready
* 3. Compiles Electron TypeScript
* 4. Launches Electron
*/
import { spawn } from "node:child_process"
import { existsSync, readFileSync, watch } from "node:fs"
import os from "node:os"
import path from "node:path"
import { fileURLToPath } from "node:url"
@@ -22,64 +18,6 @@ const rootDir = path.join(__dirname, "..")
const NEXT_PORT = 6002
const NEXT_URL = `http://localhost:${NEXT_PORT}`
/**
* Get the user data path (same as Electron's app.getPath("userData"))
*/
function getUserDataPath() {
const appName = "next-ai-draw-io"
switch (process.platform) {
case "darwin":
return path.join(
os.homedir(),
"Library",
"Application Support",
appName,
)
case "win32":
return path.join(
process.env.APPDATA ||
path.join(os.homedir(), "AppData", "Roaming"),
appName,
)
default:
return path.join(os.homedir(), ".config", appName)
}
}
/**
* Load preset configuration from config file
*/
function loadPresetConfig() {
const configPath = path.join(getUserDataPath(), "config-presets.json")
if (!existsSync(configPath)) {
console.log("📋 No preset configuration found, using .env.local")
return null
}
try {
const content = readFileSync(configPath, "utf-8")
const data = JSON.parse(content)
if (!data.currentPresetId) {
console.log("📋 No active preset, using .env.local")
return null
}
const preset = data.presets.find((p) => p.id === data.currentPresetId)
if (!preset) {
console.log("📋 Active preset not found, using .env.local")
return null
}
console.log(`📋 Using preset: "${preset.name}"`)
return preset.config
} catch (error) {
console.error("Failed to load preset config:", error.message)
return null
}
}
/**
* Wait for the Next.js server to be ready
*/
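// waitForServer's body is outside this hunk; a minimal sketch of the polling it
// implies (assumed details, not the script's actual implementation):
async function waitForServerSketch(url, timeoutMs = 60_000) {
    const deadline = Date.now() + timeoutMs
    while (Date.now() < deadline) {
        try {
            await fetch(url) // any HTTP response means the dev server is accepting connections
            return
        } catch {
            // not listening yet; retry shortly
        }
        await new Promise((resolve) => setTimeout(resolve, 500))
    }
    throw new Error(`Timed out waiting for ${url}`)
}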
@@ -129,25 +67,14 @@ function runCommand(command, args, options = {}) {
}
/**
* Start Next.js dev server with preset environment
* Start Next.js dev server
*/
function startNextServer(presetEnv) {
const env = { ...process.env }
// Apply preset environment variables
if (presetEnv) {
for (const [key, value] of Object.entries(presetEnv)) {
if (value !== undefined && value !== "") {
env[key] = value
}
}
}
function startNextServer() {
const nextProcess = spawn("npm", ["run", "dev"], {
cwd: rootDir,
stdio: "inherit",
shell: true,
env,
env: process.env,
})
nextProcess.on("error", (err) => {
@@ -163,12 +90,9 @@ function startNextServer(presetEnv) {
async function main() {
console.log("🚀 Starting Electron development environment...\n")
// Load preset configuration
const presetEnv = loadPresetConfig()
// Start Next.js dev server with preset env
// Start Next.js dev server
console.log("1. Starting Next.js development server...")
let nextProcess = startNextServer(presetEnv)
const nextProcess = startNextServer()
// Wait for Next.js to be ready
try {
@@ -203,75 +127,14 @@ async function main() {
},
})
// Watch for preset config changes
const configPath = path.join(getUserDataPath(), "config-presets.json")
let configWatcher = null
let restartPending = false
function setupConfigWatcher() {
if (!existsSync(path.dirname(configPath))) {
// Directory doesn't exist yet, check again later
setTimeout(setupConfigWatcher, 5000)
return
}
try {
configWatcher = watch(
configPath,
{ persistent: false },
async (eventType) => {
if (eventType === "change" && !restartPending) {
restartPending = true
console.log(
"\n🔄 Preset configuration changed, restarting Next.js server...",
)
// Kill current Next.js process
nextProcess.kill()
// Wait a bit for process to die
await new Promise((r) => setTimeout(r, 1000))
// Reload preset and restart
const newPresetEnv = loadPresetConfig()
nextProcess = startNextServer(newPresetEnv)
try {
await waitForServer(NEXT_URL)
console.log(
"✅ Next.js server restarted with new configuration\n",
)
} catch (err) {
console.error(
"❌ Failed to restart Next.js:",
err.message,
)
}
restartPending = false
}
},
)
console.log("👀 Watching for preset configuration changes...")
} catch (err) {
// File might not exist yet, that's ok
setTimeout(setupConfigWatcher, 5000)
}
}
// Start watching after a delay (config file might not exist yet)
setTimeout(setupConfigWatcher, 2000)
electronProcess.on("close", (code) => {
console.log(`\nElectron exited with code ${code}`)
if (configWatcher) configWatcher.close()
nextProcess.kill()
process.exit(code || 0)
})
electronProcess.on("error", (err) => {
console.error("Electron error:", err)
if (configWatcher) configWatcher.close()
nextProcess.kill()
process.exit(1)
})
@@ -279,7 +142,6 @@ async function main() {
// Handle termination signals
const cleanup = () => {
console.log("\n🛑 Shutting down...")
if (configWatcher) configWatcher.close()
electronProcess.kill()
nextProcess.kill()
process.exit(0)


@@ -22,7 +22,7 @@ function applyDiagramOperations(xmlContent, operations) {
result: xmlContent,
errors: [
{
type: "update",
operation: "update",
cellId: "",
message: `XML parse error: ${parseError.textContent}`,
},
@@ -36,7 +36,7 @@ function applyDiagramOperations(xmlContent, operations) {
result: xmlContent,
errors: [
{
type: "update",
operation: "update",
cellId: "",
message: "Could not find <root> element in XML",
},
@@ -51,11 +51,11 @@ function applyDiagramOperations(xmlContent, operations) {
})
for (const op of operations) {
if (op.type === "update") {
if (op.operation === "update") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({
type: "update",
operation: "update",
cellId: op.cell_id,
message: `Cell with id="${op.cell_id}" not found`,
})
@@ -63,7 +63,7 @@ function applyDiagramOperations(xmlContent, operations) {
}
if (!op.new_xml) {
errors.push({
type: "update",
operation: "update",
cellId: op.cell_id,
message: "new_xml is required for update operation",
})
@@ -76,7 +76,7 @@ function applyDiagramOperations(xmlContent, operations) {
const newCell = newDoc.querySelector("mxCell")
if (!newCell) {
errors.push({
type: "update",
operation: "update",
cellId: op.cell_id,
message: "new_xml must contain an mxCell element",
})
@@ -85,7 +85,7 @@ function applyDiagramOperations(xmlContent, operations) {
const newCellId = newCell.getAttribute("id")
if (newCellId !== op.cell_id) {
errors.push({
type: "update",
operation: "update",
cellId: op.cell_id,
message: `ID mismatch: cell_id is "${op.cell_id}" but new_xml has id="${newCellId}"`,
})
@@ -94,10 +94,10 @@ function applyDiagramOperations(xmlContent, operations) {
const importedNode = doc.importNode(newCell, true)
existingCell.parentNode?.replaceChild(importedNode, existingCell)
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "add") {
} else if (op.operation === "add") {
if (cellMap.has(op.cell_id)) {
errors.push({
type: "add",
operation: "add",
cellId: op.cell_id,
message: `Cell with id="${op.cell_id}" already exists`,
})
@@ -105,7 +105,7 @@ function applyDiagramOperations(xmlContent, operations) {
}
if (!op.new_xml) {
errors.push({
type: "add",
operation: "add",
cellId: op.cell_id,
message: "new_xml is required for add operation",
})
@@ -118,7 +118,7 @@ function applyDiagramOperations(xmlContent, operations) {
const newCell = newDoc.querySelector("mxCell")
if (!newCell) {
errors.push({
type: "add",
operation: "add",
cellId: op.cell_id,
message: "new_xml must contain an mxCell element",
})
@@ -127,7 +127,7 @@ function applyDiagramOperations(xmlContent, operations) {
const newCellId = newCell.getAttribute("id")
if (newCellId !== op.cell_id) {
errors.push({
type: "add",
operation: "add",
cellId: op.cell_id,
message: `ID mismatch: cell_id is "${op.cell_id}" but new_xml has id="${newCellId}"`,
})
@@ -136,11 +136,11 @@ function applyDiagramOperations(xmlContent, operations) {
const importedNode = doc.importNode(newCell, true)
root.appendChild(importedNode)
cellMap.set(op.cell_id, importedNode)
} else if (op.type === "delete") {
} else if (op.operation === "delete") {
const existingCell = cellMap.get(op.cell_id)
if (!existingCell) {
errors.push({
type: "delete",
operation: "delete",
cellId: op.cell_id,
message: `Cell with id="${op.cell_id}" not found`,
})
@@ -201,7 +201,7 @@ function assert(condition, message) {
test("Update operation changes cell value", () => {
const { result, errors } = applyDiagramOperations(sampleXml, [
{
type: "update",
operation: "update",
cell_id: "2",
new_xml:
'<mxCell id="2" value="Updated Box A" style="rounded=1;" vertex="1" parent="1"><mxGeometry x="100" y="100" width="120" height="60" as="geometry"/></mxCell>',
@@ -224,7 +224,7 @@ test("Update operation changes cell value", () => {
test("Update operation fails for non-existent cell", () => {
const { errors } = applyDiagramOperations(sampleXml, [
{
type: "update",
operation: "update",
cell_id: "999",
new_xml: '<mxCell id="999" value="Test"/>',
},
@@ -239,7 +239,7 @@ test("Update operation fails for non-existent cell", () => {
test("Update operation fails on ID mismatch", () => {
const { errors } = applyDiagramOperations(sampleXml, [
{
type: "update",
operation: "update",
cell_id: "2",
new_xml: '<mxCell id="WRONG" value="Test"/>',
},
@@ -254,7 +254,7 @@ test("Update operation fails on ID mismatch", () => {
test("Add operation creates new cell", () => {
const { result, errors } = applyDiagramOperations(sampleXml, [
{
type: "add",
operation: "add",
cell_id: "new1",
new_xml:
'<mxCell id="new1" value="New Box" style="rounded=1;" vertex="1" parent="1"><mxGeometry x="500" y="100" width="120" height="60" as="geometry"/></mxCell>',
@@ -274,7 +274,7 @@ test("Add operation creates new cell", () => {
test("Add operation fails for duplicate ID", () => {
const { errors } = applyDiagramOperations(sampleXml, [
{
type: "add",
operation: "add",
cell_id: "2",
new_xml: '<mxCell id="2" value="Duplicate"/>',
},
@@ -289,7 +289,7 @@ test("Add operation fails for duplicate ID", () => {
test("Add operation fails on ID mismatch", () => {
const { errors } = applyDiagramOperations(sampleXml, [
{
type: "add",
operation: "add",
cell_id: "new1",
new_xml: '<mxCell id="WRONG" value="Test"/>',
},
@@ -303,7 +303,7 @@ test("Add operation fails on ID mismatch", () => {
test("Delete operation removes cell", () => {
const { result, errors } = applyDiagramOperations(sampleXml, [
{ type: "delete", cell_id: "3" },
{ operation: "delete", cell_id: "3" },
])
assert(
errors.length === 0,
@@ -315,7 +315,7 @@ test("Delete operation removes cell", () => {
test("Delete operation fails for non-existent cell", () => {
const { errors } = applyDiagramOperations(sampleXml, [
{ type: "delete", cell_id: "999" },
{ operation: "delete", cell_id: "999" },
])
assert(errors.length === 1, "Should have one error")
assert(
@@ -327,18 +327,18 @@ test("Delete operation fails for non-existent cell", () => {
test("Multiple operations in sequence", () => {
const { result, errors } = applyDiagramOperations(sampleXml, [
{
type: "update",
operation: "update",
cell_id: "2",
new_xml:
'<mxCell id="2" value="Updated" style="rounded=1;" vertex="1" parent="1"><mxGeometry x="100" y="100" width="120" height="60" as="geometry"/></mxCell>',
},
{
type: "add",
operation: "add",
cell_id: "new1",
new_xml:
'<mxCell id="new1" value="Added" style="rounded=1;" vertex="1" parent="1"><mxGeometry x="500" y="100" width="120" height="60" as="geometry"/></mxCell>',
},
{ type: "delete", cell_id: "3" },
{ operation: "delete", cell_id: "3" },
])
assert(
errors.length === 0,
@@ -354,14 +354,14 @@ test("Multiple operations in sequence", () => {
test("Invalid XML returns parse error", () => {
const { errors } = applyDiagramOperations("<not valid xml", [
{ type: "delete", cell_id: "1" },
{ operation: "delete", cell_id: "1" },
])
assert(errors.length === 1, "Should have one error")
})
test("Missing root element returns error", () => {
const { errors } = applyDiagramOperations("<mxfile></mxfile>", [
{ type: "delete", cell_id: "1" },
{ operation: "delete", cell_id: "1" },
])
assert(errors.length === 1, "Should have one error")
assert(