Compare commits


20 Commits

Author SHA1 Message Date
dayuan.jiang
705b42e4ce fix: restore status notice indicator removed in PR #77 2025-12-05 23:13:37 +09:00
Dayuan Jiang
3f35c52527 feat: add draw.io theme toggle between minimal and sketch (#106)
- Add toggle button in chat input area to switch between min and sketch themes
- Show warning dialog before switching (clears messages and diagram)
- Persist theme selection in localStorage
- Default theme is minimal (hides shapes sidebar)
2025-12-05 23:10:48 +09:00
Dayuan Jiang
0af5229477 feat: add markdown rendering and resizable chat panel (#104)
* feat: add markdown rendering for chat messages

- Add react-markdown and @tailwindcss/typography for markdown support
- Use prose styling for assistant message formatting
- Fix Radix ScrollArea viewport horizontal overflow issue
- Add CSS fix for viewport width constraint

* feat: add resizable chat panel

- Replace fixed width layout with react-resizable-panels
- Chat panel can be resized by dragging the handle
- Panel is collapsible with min 15% and max 50% width
- Ctrl+B keyboard shortcut still works for toggle
2025-12-05 22:42:39 +09:00
Twelveeee
3fb349fb3e clear button can't clear error msg & feat: add settings dialog and access code (#77)
* fix: clear button can't clear error msg

* new: add settings dialog and access code

* fix: address review feedback - dark mode, types, formatting

* feat: only show Settings button when access code is required

* refactor: rename ACCESS_CODES to ACCESS_CODE_LIST

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-05 22:09:34 +09:00
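The settings dialog and access-code storage added here are not themselves shown in the file diffs below (only their consumers are). A minimal sketch of what components/settings-dialog.tsx presumably does, with the storage key value and helper names assumed for illustration: persist the code in localStorage so chat-panel.tsx can forward it as the `x-access-code` header that app/api/chat/route.ts checks against `ACCESS_CODE_LIST`.

```typescript
// Hypothetical sketch only — the real components/settings-dialog.tsx is not in this compare view.
// The storage key value and helper names are assumptions.
export const STORAGE_ACCESS_CODE_KEY = "access-code";

export function saveAccessCode(code: string): void {
  // chat-panel.tsx reads this key and sends it as the `x-access-code` request header
  localStorage.setItem(STORAGE_ACCESS_CODE_KEY, code.trim());
}

export function loadAccessCode(): string {
  return localStorage.getItem(STORAGE_ACCESS_CODE_KEY) ?? "";
}
```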
Dayuan Jiang
ed29e32ba3 feat: restore Langfuse observability integration (#103)
- Add lib/langfuse.ts with client, trace input/output, telemetry config
- Add instrumentation.ts for OpenTelemetry setup with Langfuse span processor
- Add /api/log-save endpoint for logging diagram saves
- Add /api/log-feedback endpoint for thumbs up/down feedback
- Update chat route with sessionId tracking and telemetry
- Add feedback buttons (thumbs up/down) to chat messages
- Add sessionId tracking throughout the app
- Update env.example with Langfuse configuration
- Add @langfuse/client, @langfuse/otel, @langfuse/tracing, @opentelemetry/sdk-trace-node
2025-12-05 21:15:02 +09:00
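lib/langfuse.ts itself is not included in this compare, only its call sites in the chat route. A hedged sketch of the telemetry helper those call sites imply, assuming the standard LANGFUSE_* environment variable names; the real implementation may differ:

```typescript
// Sketch of getTelemetryConfig() as consumed by app/api/chat/route.ts.
// Returns an AI SDK `experimental_telemetry` object, or undefined when
// Langfuse is not configured so tracing stays disabled.
export function getTelemetryConfig(opts: { sessionId?: string; userId?: string }) {
  if (!process.env.LANGFUSE_PUBLIC_KEY || !process.env.LANGFUSE_SECRET_KEY) {
    return undefined;
  }
  return {
    isEnabled: true,
    metadata: {
      sessionId: opts.sessionId,
      userId: opts.userId,
    },
  };
}
```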
Dayuan Jiang
4cd78dc561 chore: remove complex 503 error handling code (#102)
- Remove 15s streaming timeout detection (too slow, added complexity)
- Remove status indicator (issue resolved by switching model)
- Remove streamingError state and related refs
- Simplify onFinish callback (remove 503 detection logging)
- Remove errorHandler function (use default AI SDK errors)

The real fix was switching from global.* to us.* Bedrock model.
This removes ~134 lines of unnecessary complexity.
2025-12-05 20:18:19 +09:00
Dayuan Jiang
e0c5d966e3 feat: add image upload validation with 2MB limit and max 5 files (#101)
- Add 2MB file size limit with client and server-side validation
- Add max 5 files limit per upload
- Add sonner toast library for better error notifications
- Create ErrorToast component with keyboard accessibility
- Batch multiple validation errors into single toast
- Validate file size in all upload methods (input, paste, drag-drop)
- Add server-side validation in /api/chat endpoint
2025-12-05 19:30:50 +09:00
Dayuan Jiang
33471d5b3a docs: add AI provider configuration guide (#100)
- Add docs/ai-providers.md with detailed setup instructions for all providers
- Update README.md, README_CN.md, README_JA.md with provider guide links
- Add model capability requirements note
- Simplify provider list in READMEs

Closes #79
2025-12-05 18:53:34 +09:00
Dayuan Jiang
3ef9908df7 feat: add confirmation dialog to prevent accidental back navigation (#99)
Addresses conflict between right-click drag and browser back gesture in
Chromium-based browsers. Shows browser confirmation dialog when user
tries to navigate away, preventing accidental page exits.

Closes #80
2025-12-05 18:42:36 +09:00
Dayuan Jiang
57bfc9cef7 fix: update status indicator to show outage resolved (#98) 2025-12-05 18:07:25 +09:00
Dayuan Jiang
0543f71c43 fix: use console.log instead of console.error for XML validation during streaming (#96) 2025-12-05 16:59:14 +09:00
Dayuan Jiang
970b88612d fix: add service status indicator for ongoing issues (#95) 2025-12-05 16:46:17 +09:00
Dayuan Jiang
c805277a76 fix: enable UI retry when Bedrock returns early 503 error (#94)
- Add error prop to ChatInput to detect error state
- Update isDisabled logic to allow retry when there's an error
- Pass combined error (SDK error + streamingError) to ChatInput

When Bedrock returns 503 ServiceUnavailableException before streaming
starts, AI SDK's onError fires but status may not transition to "ready".
This fix ensures the input is re-enabled when an error occurs, allowing
users to retry their request.
2025-12-05 16:22:38 +09:00
Dayuan Jiang
95160f5a21 fix: handle Bedrock 503 streaming errors with timeout detection (#92)
- Add 15s streaming timeout to detect mid-stream stalls (e.g., Bedrock 503)
- Add stop() call to allow user retry after timeout
- Add streamingError state for timeout-detected errors
- Improve server-side error logging for empty usage detection
- Add user-friendly error messages for ServiceUnavailable and Throttling errors
2025-12-05 14:23:47 +09:00
broBinChen
b206e16c02 fix: clear files when clicking text-only examples (#82)
Fixed an issue where files from previous examples would persist when clicking on "Animated Diagram" or "Creative Drawing" examples that don't require image uploads.
2025-12-05 14:07:14 +09:00
broBinChen
563b18e8ff refactor: replace deprecated addToolResult with addToolOutput (#85)
Replaced the deprecated addToolResult API with the new addToolOutput API from the `ai` package to ensure compatibility with future versions.
2025-12-05 14:02:45 +09:00
dayuan.jiang
2366255e8f fix: use credential provider chain for bedrock IAM role support 2025-12-05 09:19:26 +09:00
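lib/ai-providers.ts is not part of the diffs below, so this is only a sketch of what the credential-chain change likely looks like; the `credentialProvider` option name is an assumption about the `@ai-sdk/amazon-bedrock` version in use:

```typescript
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// Instead of requiring static AWS keys, fall back to the default Node.js
// credential provider chain (env vars, shared config, SSO, IAM role).
const bedrock = createAmazonBedrock({
  region: process.env.AWS_REGION ?? "us-east-1",
  credentialProvider: fromNodeProviderChain(),
});
```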
dayuan.jiang
255308f829 fix: make bedrock credentials optional for IAM role support 2025-12-05 09:11:10 +09:00
dayuan.jiang
a9493c8877 fix: write env vars to .env.production for Amplify SSR runtime 2025-12-05 09:04:54 +09:00
dayuan.jiang
a0c3db100a fix: add favicon.ico to public folder for header logo 2025-12-05 08:56:34 +09:00
29 changed files with 3342 additions and 158 deletions


@@ -81,7 +81,7 @@ Diagrams are represented as XML that can be rendered in draw.io. The AI processe
## Multi-Provider Support
- AWS Bedrock (default)
-- OpenAI / OpenAI-compatible APIs (via `OPENAI_BASE_URL`)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI
@@ -89,6 +89,12 @@ Diagrams are represented as XML that can be rendered in draw.io. The AI processe
- OpenRouter
- DeepSeek
All providers except AWS Bedrock and OpenRouter support custom endpoints.
📖 **[Detailed Provider Configuration Guide](./docs/ai-providers.md)** - See setup instructions for each provider.
**Model Requirements**: This task requires strong model capabilities for generating long-form text with strict formatting constraints (draw.io XML). Recommended models include Claude Sonnet 4.5, GPT-4o, Gemini 2.0, and DeepSeek V3/R1.
Note that `claude-sonnet-4-5` has trained on draw.io diagrams with AWS logos, so if you want to create AWS architecture diagrams, this is the best choice.
## Getting Started
@@ -143,8 +149,11 @@ Edit `.env.local` and configure your chosen provider:
- Set `AI_PROVIDER` to your chosen provider (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek)
- Set `AI_MODEL` to the specific model you want to use
- Add the required API keys for your provider
- `ACCESS_CODE_LIST`: Optional access password(s), can be comma-separated for multiple passwords.
-See the [Multi-Provider Support](#multi-provider-support) section above for provider-specific configuration examples.
> Warning: If you do not set `ACCESS_CODE_LIST`, anyone can access your deployed site directly, which may lead to rapid depletion of your token. It is recommended to set this option.
See the [Provider Configuration Guide](./docs/ai-providers.md) for detailed setup instructions for each provider.
4. Run the development server:


@@ -81,7 +81,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
## 多提供商支持
- AWS Bedrock（默认）
-- OpenAI / OpenAI兼容API（通过 `OPENAI_BASE_URL`）
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI
@@ -89,6 +89,12 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
- OpenRouter
- DeepSeek
除AWS Bedrock和OpenRouter外所有提供商都支持自定义端点。
📖 **[详细的提供商配置指南](./docs/ai-providers.md)** - 查看各提供商的设置说明。
**模型要求**此任务需要强大的模型能力因为它涉及生成具有严格格式约束的长文本draw.io XML。推荐使用Claude Sonnet 4.5、GPT-4o、Gemini 2.0和DeepSeek V3/R1。
注意：`claude-sonnet-4-5` 已在带有AWS标志的draw.io图表上进行训练，因此如果您想创建AWS架构图，这是最佳选择。
## 快速开始
@@ -143,8 +149,11 @@ cp env.example .env.local
-`AI_PROVIDER` 设置为您选择的提供商（bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek）
-`AI_MODEL` 设置为您要使用的特定模型
- 添加您的提供商所需的API密钥
- `ACCESS_CODE_LIST` 访问密码,可选,可以使用逗号隔开多个密码。
-请参阅上面的[多提供商支持](#多提供商支持)部分了解特定提供商的配置示例
> 警告：如果不填写 `ACCESS_CODE_LIST`，则任何人都可以直接使用你部署后的网站，可能会导致你的 token 被急速消耗完毕，建议填写此选项
详细设置说明请参阅[提供商配置指南](./docs/ai-providers.md)。
4. 运行开发服务器：


@@ -81,7 +81,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
## マルチプロバイダーサポート
- AWS Bedrock（デフォルト）
-- OpenAI / OpenAI互換API（`OPENAI_BASE_URL`経由）
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI
@@ -89,6 +89,12 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
- OpenRouter
- DeepSeek
AWS BedrockとOpenRouter以外のすべてのプロバイダーはカスタムエンドポイントをサポートしています。
📖 **[詳細なプロバイダー設定ガイド](./docs/ai-providers.md)** - 各プロバイダーの設定手順をご覧ください。
**モデル要件**このタスクは厳密なフォーマット制約draw.io XMLを持つ長文テキスト生成を伴うため、強力なモデル機能が必要です。Claude Sonnet 4.5、GPT-4o、Gemini 2.0、DeepSeek V3/R1を推奨します。
注：`claude-sonnet-4-5`はAWSロゴ付きのdraw.ioダイアグラムで学習されているため、AWSアーキテクチャダイアグラムを作成したい場合は最適な選択です。
## はじめに
@@ -143,8 +149,11 @@ cp env.example .env.local
- `AI_PROVIDER`を選択したプロバイダーに設定（bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek）
- `AI_MODEL`を使用する特定のモデルに設定
- プロバイダーに必要なAPIキーを追加
- `ACCESS_CODE_LIST` アクセスパスワード(オプション)。カンマ区切りで複数のパスワードを指定できます。
-プロバイダー固有の設定例については、上記の[マルチプロバイダーサポート](#マルチプロバイダーサポート)セクションを参照してください
> 警告：`ACCESS_CODE_LIST`を設定しない場合、誰でもデプロイされたサイトに直接アクセスできるため、トークンが急速に消費される可能性があります。このオプションを設定することをお勧めします
詳細な設定手順については[プロバイダー設定ガイド](./docs/ai-providers.md)を参照してください。
4. 開発サーバーを起動:

amplify.yml (new file, 22 lines)

@@ -0,0 +1,22 @@
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci --cache .npm --prefer-offline
    build:
      commands:
        # Write env vars to .env.production for Next.js SSR runtime
        - env | grep -e AI_MODEL >> .env.production
        - env | grep -e AI_PROVIDER >> .env.production
        - env | grep -e OPENAI_API_KEY >> .env.production
        - env | grep -e NEXT_PUBLIC_ >> .env.production
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - .next/cache/**/*
      - .npm/**/*


@@ -1,11 +1,42 @@
import { streamText, convertToModelMessages, createUIMessageStream, createUIMessageStreamResponse } from 'ai';
import { getAIModel } from '@/lib/ai-providers';
import { findCachedResponse } from '@/lib/cached-responses';
import { setTraceInput, setTraceOutput, getTelemetryConfig, wrapWithObserve } from '@/lib/langfuse';
import { getSystemPrompt } from '@/lib/system-prompts';
import { z } from "zod";
export const maxDuration = 300;
// File upload limits (must match client-side)
const MAX_FILE_SIZE = 2 * 1024 * 1024; // 2MB
const MAX_FILES = 5;
// Helper function to validate file parts in messages
function validateFileParts(messages: any[]): { valid: boolean; error?: string } {
const lastMessage = messages[messages.length - 1];
const fileParts = lastMessage?.parts?.filter((p: any) => p.type === 'file') || [];
if (fileParts.length > MAX_FILES) {
return { valid: false, error: `Too many files. Maximum ${MAX_FILES} allowed.` };
}
for (const filePart of fileParts) {
// Data URLs format: data:image/png;base64,<data>
// Base64 increases size by ~33%, so we check the decoded size
if (filePart.url && filePart.url.startsWith('data:')) {
const base64Data = filePart.url.split(',')[1];
if (base64Data) {
const sizeInBytes = Math.ceil((base64Data.length * 3) / 4);
if (sizeInBytes > MAX_FILE_SIZE) {
return { valid: false, error: `File exceeds ${MAX_FILE_SIZE / 1024 / 1024}MB limit.` };
}
}
}
}
return { valid: true };
}
// Helper function to check if diagram is minimal/empty
function isMinimalDiagram(xml: string): boolean {
const stripped = xml.replace(/\s/g, '');
@@ -31,7 +62,46 @@ function createCachedStreamResponse(xml: string): Response {
// Inner handler function
async function handleChatRequest(req: Request): Promise<Response> {
-const { messages, xml } = await req.json();
// Check for access code
const accessCodes = process.env.ACCESS_CODE_LIST?.split(',').map(code => code.trim()).filter(Boolean) || [];
if (accessCodes.length > 0) {
const accessCodeHeader = req.headers.get('x-access-code');
if (!accessCodeHeader || !accessCodes.includes(accessCodeHeader)) {
return Response.json(
{ error: 'Invalid or missing access code. Please configure it in Settings.' },
{ status: 401 }
);
}
}
const { messages, xml, sessionId } = await req.json();
// Get user IP for Langfuse tracking
const forwardedFor = req.headers.get('x-forwarded-for');
const userId = forwardedFor?.split(',')[0]?.trim() || 'anonymous';
// Validate sessionId for Langfuse (must be string, max 200 chars)
const validSessionId = sessionId && typeof sessionId === 'string' && sessionId.length <= 200
? sessionId
: undefined;
// Extract user input text for Langfuse trace
const currentMessage = messages[messages.length - 1];
const userInputText = currentMessage?.parts?.find((p: any) => p.type === 'text')?.text || '';
// Update Langfuse trace with input, session, and user
setTraceInput({
input: userInputText,
sessionId: validSessionId,
userId: userId,
});
// === FILE VALIDATION START ===
const fileValidation = validateFileParts(messages);
if (!fileValidation.valid) {
return Response.json({ error: fileValidation.error }, { status: 400 });
}
// === FILE VALIDATION END ===
// === CACHE CHECK START ===
const isFirstMessage = messages.length === 1;
@@ -154,9 +224,19 @@ ${lastMessageText}
messages: allMessages,
...(providerOptions && { providerOptions }),
...(headers && { headers }),
-onFinish: ({ usage, providerMetadata }) => {
// Langfuse telemetry config (returns undefined if not configured)
...(getTelemetryConfig({ sessionId: validSessionId, userId }) && {
experimental_telemetry: getTelemetryConfig({ sessionId: validSessionId, userId }),
}),
onFinish: ({ text, usage, providerMetadata }) => {
console.log('[Cache] Full providerMetadata:', JSON.stringify(providerMetadata, null, 2));
console.log('[Cache] Usage:', JSON.stringify(usage, null, 2));
// Pass usage to Langfuse (Bedrock streaming doesn't auto-report tokens to telemetry)
// AI SDK uses inputTokens/outputTokens, Langfuse expects promptTokens/completionTokens
setTraceOutput(text, {
promptTokens: usage?.inputTokens,
completionTokens: usage?.outputTokens,
});
},
tools: {
// Client-side tool that will be executed on the client
@@ -220,34 +300,11 @@ IMPORTANT: Keep edits concise:
temperature: 0,
});

-// Error handler function to provide detailed error messages
-function errorHandler(error: unknown) {
-    if (error == null) {
-        return 'unknown error';
-    }
-    const errorString = typeof error === 'string'
-        ? error
-        : error instanceof Error
-            ? error.message
-            : JSON.stringify(error);
-    // Check for image not supported error (e.g., DeepSeek models)
-    if (errorString.includes('image_url') ||
-        errorString.includes('unknown variant') ||
-        (errorString.includes('image') && errorString.includes('not supported'))) {
-        return 'This model does not support image inputs. Please remove the image and try again, or switch to a vision-capable model.';
-    }
-    return errorString;
-}
-return result.toUIMessageStreamResponse({
-    onError: errorHandler,
-});
return result.toUIMessageStreamResponse();
}

-export async function POST(req: Request) {
// Wrap handler with error handling
async function safeHandler(req: Request): Promise<Response> {
try {
return await handleChatRequest(req);
} catch (error) {
@@ -255,3 +312,10 @@ export async function POST(req: Request) {
return Response.json({ error: 'Internal server error' }, { status: 500 });
}
}
// Wrap with Langfuse observe (if configured)
const observedHandler = wrapWithObserve(safeHandler);
export async function POST(req: Request) {
return observedHandler(req);
}

app/api/config/route.ts (new file, 9 lines)

@@ -0,0 +1,9 @@
import { NextResponse } from "next/server";
export async function GET() {
    const accessCodes = process.env.ACCESS_CODE_LIST?.split(',').map(code => code.trim()).filter(Boolean) || [];
    return NextResponse.json({
        accessCodeRequired: accessCodes.length > 0,
    });
}


@@ -0,0 +1,103 @@
import { getLangfuseClient } from '@/lib/langfuse';
import { randomUUID } from 'crypto';
import { z } from 'zod';
const feedbackSchema = z.object({
messageId: z.string().min(1).max(200),
feedback: z.enum(['good', 'bad']),
sessionId: z.string().min(1).max(200).optional(),
});
export async function POST(req: Request) {
const langfuse = getLangfuseClient();
if (!langfuse) {
return Response.json({ success: true, logged: false });
}
// Validate input
let data;
try {
data = feedbackSchema.parse(await req.json());
} catch {
return Response.json({ success: false, error: 'Invalid input' }, { status: 400 });
}
const { messageId, feedback, sessionId } = data;
// Get user IP for tracking
const forwardedFor = req.headers.get('x-forwarded-for');
const userId = forwardedFor?.split(',')[0]?.trim() || 'anonymous';
try {
// Find the most recent chat trace for this session to attach the score to
const tracesResponse = await langfuse.api.trace.list({
sessionId,
limit: 1,
});
const traces = tracesResponse.data || [];
const latestTrace = traces[0];
if (!latestTrace) {
// No trace found for this session - create a standalone feedback trace
const traceId = randomUUID();
const timestamp = new Date().toISOString();
await langfuse.api.ingestion.batch({
batch: [
{
type: 'trace-create',
id: randomUUID(),
timestamp,
body: {
id: traceId,
name: 'user-feedback',
sessionId,
userId,
input: { messageId, feedback },
metadata: { source: 'feedback-button', note: 'standalone - no chat trace found' },
timestamp,
},
},
{
type: 'score-create',
id: randomUUID(),
timestamp,
body: {
id: randomUUID(),
traceId,
name: 'user-feedback',
value: feedback === 'good' ? 1 : 0,
comment: `User gave ${feedback} feedback`,
},
},
],
});
} else {
// Attach score to the existing chat trace
const timestamp = new Date().toISOString();
await langfuse.api.ingestion.batch({
batch: [
{
type: 'score-create',
id: randomUUID(),
timestamp,
body: {
id: randomUUID(),
traceId: latestTrace.id,
name: 'user-feedback',
value: feedback === 'good' ? 1 : 0,
comment: `User gave ${feedback} feedback`,
},
},
],
});
}
return Response.json({ success: true, logged: true });
} catch (error) {
console.error('Langfuse feedback error:', error);
return Response.json({ success: false, error: 'Failed to log feedback' }, { status: 500 });
}
}

app/api/log-save/route.ts (new file, 65 lines)

@@ -0,0 +1,65 @@
import { getLangfuseClient } from '@/lib/langfuse';
import { randomUUID } from 'crypto';
import { z } from 'zod';
const saveSchema = z.object({
filename: z.string().min(1).max(255),
format: z.enum(['drawio', 'png', 'svg']),
sessionId: z.string().min(1).max(200).optional(),
});
export async function POST(req: Request) {
const langfuse = getLangfuseClient();
if (!langfuse) {
return Response.json({ success: true, logged: false });
}
// Validate input
let data;
try {
data = saveSchema.parse(await req.json());
} catch {
return Response.json({ success: false, error: 'Invalid input' }, { status: 400 });
}
const { filename, format, sessionId } = data;
try {
const timestamp = new Date().toISOString();
// Find the most recent chat trace for this session to attach the save flag
const tracesResponse = await langfuse.api.trace.list({
sessionId,
limit: 1,
});
const traces = tracesResponse.data || [];
const latestTrace = traces[0];
if (latestTrace) {
// Add a score to the existing trace to flag that user saved
await langfuse.api.ingestion.batch({
batch: [
{
type: 'score-create',
id: randomUUID(),
timestamp,
body: {
id: randomUUID(),
traceId: latestTrace.id,
name: 'diagram-saved',
value: 1,
comment: `User saved diagram as ${filename}.${format}`,
},
},
],
});
}
// If no trace found, skip logging (user hasn't chatted yet)
return Response.json({ success: true, logged: !!latestTrace });
} catch (error) {
console.error('Langfuse save error:', error);
return Response.json({ success: false, error: 'Failed to log save' }, { status: 500 });
}
}


@@ -1,6 +1,7 @@
@import "tailwindcss"; @import "tailwindcss";
@plugin "tailwindcss-animate"; @plugin "tailwindcss-animate";
@plugin "@tailwindcss/typography";
@custom-variant dark (&:is(.dark *));
@@ -152,6 +153,12 @@
}
}
/* Fix for Radix ScrollArea viewport horizontal overflow */
[data-slot="scroll-area-viewport"] > div {
display: block !important;
width: 100% !important;
}
/* Custom scrollbar */
@layer utilities {
.scrollbar-thin {


@@ -96,7 +96,6 @@ export default function RootLayout({
className={`${plusJakarta.variable} ${jetbrainsMono.variable} antialiased`}
>
<DiagramProvider>{children}</DiagramProvider> <DiagramProvider>{children}</DiagramProvider>
<Analytics /> <Analytics />
</body> </body>
{process.env.NEXT_PUBLIC_GA_ID && (


@@ -1,14 +1,28 @@
"use client"; "use client";
import React, { useState, useEffect } from "react"; import React, { useState, useEffect, useRef } from "react";
import { DrawIoEmbed } from "react-drawio"; import { DrawIoEmbed } from "react-drawio";
import ChatPanel from "@/components/chat-panel"; import ChatPanel from "@/components/chat-panel";
import { useDiagram } from "@/contexts/diagram-context"; import { useDiagram } from "@/contexts/diagram-context";
import { Monitor } from "lucide-react"; import { Monitor } from "lucide-react";
import {
ResizablePanelGroup,
ResizablePanel,
ResizableHandle,
} from "@/components/ui/resizable";
import type { ImperativePanelHandle } from "react-resizable-panels";
export default function Home() {
const { drawioRef, handleDiagramExport } = useDiagram();
const [isMobile, setIsMobile] = useState(false);
const [isChatVisible, setIsChatVisible] = useState(true);
const [drawioUi, setDrawioUi] = useState<"min" | "sketch">(() => {
if (typeof window !== "undefined") {
const saved = localStorage.getItem("drawio-theme");
if (saved === "min" || saved === "sketch") return saved;
}
return "min";
});
const chatPanelRef = useRef<ImperativePanelHandle>(null);
useEffect(() => {
const checkMobile = () => {
@@ -20,20 +34,46 @@ export default function Home() {
return () => window.removeEventListener("resize", checkMobile);
}, []);
const toggleChatPanel = () => {
const panel = chatPanelRef.current;
if (panel) {
if (panel.isCollapsed()) {
panel.expand();
setIsChatVisible(true);
} else {
panel.collapse();
setIsChatVisible(false);
}
}
};
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
if ((event.ctrlKey || event.metaKey) && event.key === "b") {
event.preventDefault();
-setIsChatVisible((prev) => !prev);
toggleChatPanel();
}
};
window.addEventListener("keydown", handleKeyDown);
return () => window.removeEventListener("keydown", handleKeyDown);
}, []);
// Show confirmation dialog when user tries to leave the page
// This helps prevent accidental navigation from browser back gestures
useEffect(() => {
const handleBeforeUnload = (event: BeforeUnloadEvent) => {
event.preventDefault();
return "";
};
window.addEventListener("beforeunload", handleBeforeUnload);
return () =>
window.removeEventListener("beforeunload", handleBeforeUnload);
}, []);
return (
-<div className="flex h-screen bg-background relative overflow-hidden">
<div className="h-screen bg-background relative overflow-hidden">
{/* Mobile warning overlay */}
{isMobile && (
<div className="absolute inset-0 z-50 flex items-center justify-center bg-background">
@@ -45,41 +85,62 @@ export default function Home() {
Desktop Required
</h1>
<p className="text-sm text-muted-foreground leading-relaxed">
This application works best on desktop or laptop devices. Please open it on a larger screen for the full experience.
</p>
</div>
</div>
)}
-            {/* Draw.io Canvas */}
-            <div
-                className={`${isChatVisible ? 'w-2/3' : 'w-full'} h-full relative transition-all duration-300 ease-out`}
-            >
-                <div className="absolute inset-2 rounded-xl overflow-hidden shadow-soft-lg border border-border/30 bg-white">
-                    <DrawIoEmbed
-                        ref={drawioRef}
-                        onExport={handleDiagramExport}
-                        urlParameters={{
-                            spin: true,
-                            libraries: false,
-                            saveAndExit: false,
-                            noExitBtn: true,
-                        }}
-                    />
-                </div>
-            </div>
-            {/* Chat Panel */}
-            <div
-                className={`${isChatVisible ? 'w-1/3' : 'w-12'} h-full transition-all duration-300 ease-out`}
-            >
-                <div className="h-full py-2 pr-2">
-                    <ChatPanel
-                        isVisible={isChatVisible}
-                        onToggleVisibility={() => setIsChatVisible(!isChatVisible)}
-                    />
-                </div>
-            </div>
            <ResizablePanelGroup direction="horizontal" className="h-full">
                {/* Draw.io Canvas */}
                <ResizablePanel defaultSize={67} minSize={30}>
                    <div className="h-full relative p-2">
                        <div className="h-full rounded-xl overflow-hidden shadow-soft-lg border border-border/30 bg-white">
                            <DrawIoEmbed
                                key={drawioUi}
                                ref={drawioRef}
                                onExport={handleDiagramExport}
                                urlParameters={{
                                    ui: drawioUi,
                                    spin: true,
                                    libraries: false,
                                    saveAndExit: false,
                                    noExitBtn: true,
                                }}
                            />
                        </div>
                    </div>
                </ResizablePanel>
                <ResizableHandle withHandle />
                {/* Chat Panel */}
                <ResizablePanel
                    ref={chatPanelRef}
                    defaultSize={33}
                    minSize={15}
                    maxSize={50}
                    collapsible
                    collapsedSize={3}
                    onCollapse={() => setIsChatVisible(false)}
                    onExpand={() => setIsChatVisible(true)}
                >
                    <div className="h-full py-2 pr-2">
                        <ChatPanel
                            isVisible={isChatVisible}
                            onToggleVisibility={toggleChatPanel}
                            drawioUi={drawioUi}
                            onToggleDrawioUi={() => {
                                const newTheme = drawioUi === "min" ? "sketch" : "min";
                                localStorage.setItem("drawio-theme", newTheme);
                                setDrawioUi(newTheme);
                            }}
                        />
                    </div>
                </ResizablePanel>
            </ResizablePanelGroup>
</div>
);
}


@@ -27,7 +27,7 @@ export function ButtonWithTooltip({
<TooltipTrigger asChild>
<Button {...buttonProps}>{children}</Button>
</TooltipTrigger>
-<TooltipContent>{tooltipContent}</TooltipContent>
<TooltipContent className="max-w-xs text-wrap">{tooltipContent}</TooltipContent>
</Tooltip>
</TooltipProvider>
);


@@ -90,7 +90,10 @@ export default function ExamplePanel({
icon={<Zap className="w-4 h-4 text-primary" />}
title="Animated Diagram"
description="Draw a transformer architecture with animated connectors"
-onClick={() => setInput("Give me a **animated connector** diagram of transformer's architecture")}
onClick={() => {
setInput("Give me a **animated connector** diagram of transformer's architecture")
setFiles([])
}}
/>
<ExampleCard
@@ -111,7 +114,10 @@ export default function ExamplePanel({
icon={<Palette className="w-4 h-4 text-primary" />}
title="Creative Drawing"
description="Draw something fun and creative"
-onClick={() => setInput("Draw a cat for me")}
onClick={() => {
setInput("Draw a cat for me")
setFiles([])
}}
/>
</div>


@@ -5,6 +5,14 @@ import { Button } from "@/components/ui/button";
import { Textarea } from "@/components/ui/textarea";
import { ResetWarningModal } from "@/components/reset-warning-modal";
import { SaveDialog } from "@/components/save-dialog";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import {
Loader2,
Send,
@@ -12,12 +20,80 @@ import {
Image as ImageIcon,
History,
Download,
-Paperclip,
PenTool,
LayoutGrid,
} from "lucide-react";
import { toast } from "sonner";
import { ButtonWithTooltip } from "@/components/button-with-tooltip";
import { FilePreviewList } from "./file-preview-list";
import { useDiagram } from "@/contexts/diagram-context";
import { HistoryDialog } from "@/components/history-dialog";
import { ErrorToast } from "@/components/error-toast";
const MAX_FILE_SIZE = 2 * 1024 * 1024; // 2MB
const MAX_FILES = 5;
function formatFileSize(bytes: number): string {
const mb = bytes / 1024 / 1024;
if (mb < 0.01) return `${(bytes / 1024).toFixed(0)}KB`;
return `${mb.toFixed(2)}MB`;
}
function showErrorToast(message: React.ReactNode) {
toast.custom(
(t) => <ErrorToast message={message} onDismiss={() => toast.dismiss(t)} />,
{ duration: 5000 }
);
}
interface ValidationResult {
validFiles: File[];
errors: string[];
}
function validateFiles(newFiles: File[], existingCount: number): ValidationResult {
const errors: string[] = [];
const validFiles: File[] = [];
const availableSlots = MAX_FILES - existingCount;
if (availableSlots <= 0) {
errors.push(`Maximum ${MAX_FILES} files allowed`);
return { validFiles, errors };
}
for (const file of newFiles) {
if (validFiles.length >= availableSlots) {
errors.push(`Only ${availableSlots} more file(s) allowed`);
break;
}
if (file.size > MAX_FILE_SIZE) {
errors.push(`"${file.name}" is ${formatFileSize(file.size)} (exceeds 2MB)`);
} else {
validFiles.push(file);
}
}
return { validFiles, errors };
}
function showValidationErrors(errors: string[]) {
if (errors.length === 0) return;
if (errors.length === 1) {
showErrorToast(<span className="text-muted-foreground">{errors[0]}</span>);
} else {
showErrorToast(
<div className="flex flex-col gap-1">
<span className="font-medium">{errors.length} files rejected:</span>
<ul className="text-muted-foreground text-xs list-disc list-inside">
{errors.slice(0, 3).map((err, i) => <li key={i}>{err}</li>)}
{errors.length > 3 && <li>...and {errors.length - 3} more</li>}
</ul>
</div>
);
}
}
interface ChatInputProps {
input: string;
@@ -29,6 +105,10 @@ interface ChatInputProps {
onFileChange?: (files: File[]) => void;
showHistory?: boolean;
onToggleHistory?: (show: boolean) => void;
sessionId?: string;
error?: Error | null;
drawioUi?: "min" | "sketch";
onToggleDrawioUi?: () => void;
}
export function ChatInput({
@@ -41,6 +121,10 @@ export function ChatInput({
onFileChange = () => {},
showHistory = false,
onToggleHistory = () => {},
sessionId,
error = null,
drawioUi = "min",
onToggleDrawioUi = () => {},
}: ChatInputProps) {
const { diagramHistory, saveDiagramToFile } = useDiagram();
const textareaRef = useRef<HTMLTextAreaElement>(null);
@@ -48,12 +132,11 @@ export function ChatInput({
const [isDragging, setIsDragging] = useState(false);
const [showClearDialog, setShowClearDialog] = useState(false);
const [showSaveDialog, setShowSaveDialog] = useState(false);
const [showThemeWarning, setShowThemeWarning] = useState(false);
-const isDisabled = status === "streaming" || status === "submitted";
-useEffect(() => {
-    console.log('[ChatInput] Status changed to:', status, '| Input disabled:', isDisabled);
-}, [status, isDisabled]);
// Allow retry when there's an error (even if status is still "streaming" or "submitted")
const isDisabled =
    (status === "streaming" || status === "submitted") && !error;
const adjustTextareaHeight = useCallback(() => {
const textarea = textareaRef.current;
@@ -86,23 +169,20 @@ export function ChatInput({
);
if (imageItems.length > 0) {
-    const imageFiles = await Promise.all(
-        imageItems.map(async (item) => {
    const imageFiles = (await Promise.all(
        imageItems.map(async (item, index) => {
            const file = item.getAsFile();
            if (!file) return null;
            return new File(
                [file],
-                `pasted-image-${Date.now()}.${file.type.split("/")[1]}`,
-                {
-                    type: file.type,
-                }
                `pasted-image-${Date.now()}-${index}.${file.type.split("/")[1]}`,
                { type: file.type }
            );
        })
-    );
-    const validFiles = imageFiles.filter(
-        (file): file is File => file !== null
-    );
    )).filter((f): f is File => f !== null);
    const { validFiles, errors } = validateFiles(imageFiles, files.length);
    showValidationErrors(errors);
    if (validFiles.length > 0) {
        onFileChange([...files, ...validFiles]);
    }
@@ -111,7 +191,15 @@ export function ChatInput({
const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
const newFiles = Array.from(e.target.files || []);
-onFileChange([...files, ...newFiles]);
const { validFiles, errors } = validateFiles(newFiles, files.length);
showValidationErrors(errors);
if (validFiles.length > 0) {
onFileChange([...files, ...validFiles]);
}
// Reset input so same file can be selected again
if (fileInputRef.current) {
fileInputRef.current.value = "";
}
};
const handleRemoveFile = (fileToRemove: File) => {
@@ -145,13 +233,14 @@ export function ChatInput({
if (isDisabled) return;
const droppedFiles = e.dataTransfer.files;
const imageFiles = Array.from(droppedFiles).filter((file) =>
    file.type.startsWith("image/")
);
-if (imageFiles.length > 0) {
-    onFileChange([...files, ...imageFiles]);
const { validFiles, errors } = validateFiles(imageFiles, files.length);
showValidationErrors(errors);
if (validFiles.length > 0) {
    onFileChange([...files, ...validFiles]);
}
};
@@ -175,7 +264,10 @@ export function ChatInput({
{/* File previews */}
{files.length > 0 && (
<div className="mb-3">
<FilePreviewList files={files} onRemoveFile={handleRemoveFile} />
</div>
)}
@@ -218,6 +310,50 @@ export function ChatInput({
showHistory={showHistory}
onToggleHistory={onToggleHistory}
/>
<ButtonWithTooltip
type="button"
variant="ghost"
size="sm"
onClick={() => setShowThemeWarning(true)}
tooltipContent={drawioUi === "min" ? "Switch to Sketch theme" : "Switch to Minimal theme"}
className="h-8 w-8 p-0 text-muted-foreground hover:text-foreground"
>
{drawioUi === "min" ? (
<PenTool className="h-4 w-4" />
) : (
<LayoutGrid className="h-4 w-4" />
)}
</ButtonWithTooltip>
<Dialog open={showThemeWarning} onOpenChange={setShowThemeWarning}>
<DialogContent>
<DialogHeader>
<DialogTitle>Switch Theme?</DialogTitle>
<DialogDescription>
Switching themes will reload the diagram editor and clear any unsaved changes.
</DialogDescription>
</DialogHeader>
<DialogFooter>
<Button
variant="outline"
onClick={() => setShowThemeWarning(false)}
>
Cancel
</Button>
<Button
variant="destructive"
onClick={() => {
onClearChat();
onToggleDrawioUi();
setShowThemeWarning(false);
}}
>
Switch Theme
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
</div>
{/* Right actions */}
@@ -249,8 +385,12 @@ export function ChatInput({
<SaveDialog
open={showSaveDialog}
onOpenChange={setShowSaveDialog}
-onSave={(filename, format) => saveDiagramToFile(filename, format)}
onSave={(filename, format) =>
    saveDiagramToFile(filename, format, sessionId)
}
defaultFilename={`diagram-${new Date().toISOString().slice(0, 10)}`}
/>
<ButtonWithTooltip
@@ -282,7 +422,9 @@ export function ChatInput({
disabled={isDisabled || !input.trim()}
size="sm"
className="h-8 px-4 rounded-xl font-medium shadow-sm"
aria-label={isDisabled ? "Sending..." : "Send message"}
>
{isDisabled ? (
<Loader2 className="h-4 w-4 animate-spin" />
@@ -296,7 +438,6 @@ export function ChatInput({
</div>
</div>
</div>
</form>
);
}


@@ -2,11 +2,12 @@
import { useRef, useEffect, useState, useCallback } from "react";
import Image from "next/image";
import ReactMarkdown from "react-markdown";
import { ScrollArea } from "@/components/ui/scroll-area";
import ExamplePanel from "./chat-example-panel";
import { UIMessage } from "ai";
import { convertToLegalXml, replaceNodes, validateMxCellStructure } from "@/lib/utils";
-import { Copy, Check, X, ChevronDown, ChevronUp, Cpu, Minus, Plus, RotateCcw, Pencil } from "lucide-react";
import { Copy, Check, X, ChevronDown, ChevronUp, Cpu, Minus, Plus, ThumbsUp, ThumbsDown, RotateCcw, Pencil } from "lucide-react";
import { CodeBlock } from "./code-block";
interface EditPair {
@@ -64,18 +65,18 @@ const getMessageTextContent = (message: UIMessage): string => {
interface ChatMessageDisplayProps {
    messages: UIMessage[];
-   error?: Error | null;
    setInput: (input: string) => void;
    setFiles: (files: File[]) => void;
    sessionId?: string;
    onRegenerate?: (messageIndex: number) => void;
    onEditMessage?: (messageIndex: number, newText: string) => void;
}
export function ChatMessageDisplay({
    messages,
-   error,
    setInput,
    setFiles,
    sessionId,
    onRegenerate,
    onEditMessage,
}: ChatMessageDisplayProps) {
@@ -88,6 +89,7 @@ export function ChatMessageDisplay({
);
const [copiedMessageId, setCopiedMessageId] = useState<string | null>(null);
const [copyFailedMessageId, setCopyFailedMessageId] = useState<string | null>(null);
const [feedback, setFeedback] = useState<Record<string, "good" | "bad">>({});
const [editingMessageId, setEditingMessageId] = useState<string | null>(null);
const [editText, setEditText] = useState<string>("");
@@ -103,6 +105,34 @@ export function ChatMessageDisplay({
}
};
const submitFeedback = async (messageId: string, value: "good" | "bad") => {
// Toggle off if already selected
if (feedback[messageId] === value) {
setFeedback((prev) => {
const next = { ...prev };
delete next[messageId];
return next;
});
return;
}
setFeedback((prev) => ({ ...prev, [messageId]: value }));
try {
await fetch("/api/log-feedback", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
messageId,
feedback: value,
sessionId,
}),
});
} catch (error) {
console.warn("Failed to log feedback:", error);
}
};
const handleDisplayChart = useCallback(
(xml: string) => {
const currentXml = xml || "";
@@ -115,7 +145,7 @@ export function ChatMessageDisplay({
previousXML.current = convertedXml;
onDisplayChart(replacedXML);
} else {
-console.error("[ChatMessageDisplay] XML validation failed:", validationError);
console.log("[ChatMessageDisplay] XML validation failed:", validationError);
}
}
},
@@ -252,11 +282,11 @@ export function ChatMessageDisplay({
};
return (
-<ScrollArea className="h-full px-4 scrollbar-thin">
<ScrollArea className="h-full w-full scrollbar-thin">
{messages.length === 0 ? (
<ExamplePanel setInput={setInput} setFiles={setFiles} />
) : (
-<div className="py-4 space-y-4">
<div className="py-4 px-4 space-y-4">
{messages.map((message, messageIndex) => {
const userMessageText = message.role === "user" ? getMessageTextContent(message) : "";
const isLastAssistantMessage = message.role === "assistant" && (
@@ -271,7 +301,7 @@ export function ChatMessageDisplay({
return (
<div
key={message.id}
-className={`flex ${message.role === "user" ? "justify-end" : "justify-start"} animate-message-in`}
className={`flex w-full ${message.role === "user" ? "justify-end" : "justify-start"} animate-message-in`}
style={{ animationDelay: `${messageIndex * 50}ms` }}
>
{message.role === "user" && userMessageText && !isEditing && (
@@ -304,7 +334,7 @@ export function ChatMessageDisplay({
</button>
</div>
)}
-<div className="max-w-[85%]">
<div className="max-w-[85%] min-w-0">
{/* Edit mode for user messages */}
{isEditing && message.role === "user" ? (
<div className="flex flex-col gap-2">
@@ -360,6 +390,8 @@ export function ChatMessageDisplay({
className={`px-4 py-3 text-sm leading-relaxed ${
message.role === "user"
? "bg-primary text-primary-foreground rounded-2xl rounded-br-md shadow-sm"
: message.role === "system"
? "bg-destructive/10 text-destructive border border-destructive/20 rounded-2xl rounded-bl-md"
: "bg-muted/60 text-foreground rounded-2xl rounded-bl-md"
} ${message.role === "user" && isLastUserMessage && onEditMessage ? "cursor-pointer hover:opacity-90 transition-opacity" : ""}`}
onClick={() => {
@@ -374,8 +406,8 @@ export function ChatMessageDisplay({
switch (part.type) {
case "text":
return (
-<div key={index} className="whitespace-pre-wrap break-words">
-    {part.text}
<div key={index} className="prose prose-sm dark:prose-invert max-w-none break-words [&>*:first-child]:mt-0 [&>*:last-child]:mb-0">
    <ReactMarkdown>{part.text}</ReactMarkdown>
</div>
);
case "file":
@@ -436,6 +468,32 @@ export function ChatMessageDisplay({
<RotateCcw className="h-3.5 w-3.5" />
</button>
)}
{/* Divider */}
<div className="w-px h-4 bg-border mx-1" />
{/* Thumbs up */}
<button
onClick={() => submitFeedback(message.id, "good")}
className={`p-1.5 rounded-lg transition-colors ${
feedback[message.id] === "good"
? "text-green-600 bg-green-100"
: "text-muted-foreground/60 hover:text-green-600 hover:bg-green-50"
}`}
title="Good response"
>
<ThumbsUp className="h-3.5 w-3.5" />
</button>
{/* Thumbs down */}
<button
onClick={() => submitFeedback(message.id, "bad")}
className={`p-1.5 rounded-lg transition-colors ${
feedback[message.id] === "bad"
? "text-red-600 bg-red-100"
: "text-muted-foreground/60 hover:text-red-600 hover:bg-red-50"
}`}
title="Bad response"
>
<ThumbsDown className="h-3.5 w-3.5" />
</button>
</div>
)}
</div>
@@ -444,11 +502,6 @@ export function ChatMessageDisplay({
})}
</div>
)}
-{error && (
-    <div className="mx-4 mb-4 p-4 rounded-xl bg-red-50 border border-red-200 text-red-600 text-sm">
-        <span className="font-medium">Error:</span> {error.message}
-    </div>
-)}
<div ref={messagesEndRef} />
</ScrollArea>
);


@@ -4,7 +4,12 @@ import type React from "react";
import { useRef, useEffect, useState } from "react";
import { flushSync } from "react-dom";
import { FaGithub } from "react-icons/fa";
-import { PanelRightClose, PanelRightOpen } from "lucide-react";
import {
    PanelRightClose,
    PanelRightOpen,
    Settings,
    CheckCircle,
} from "lucide-react";
import Link from "next/link";
import Image from "next/image";
@@ -15,15 +20,24 @@ import { ChatMessageDisplay } from "./chat-message-display";
import { useDiagram } from "@/contexts/diagram-context";
import { replaceNodes, formatXML, validateMxCellStructure } from "@/lib/utils";
import { ButtonWithTooltip } from "@/components/button-with-tooltip";
import { Toaster } from "sonner";
import {
SettingsDialog,
STORAGE_ACCESS_CODE_KEY,
} from "@/components/settings-dialog";
interface ChatPanelProps {
    isVisible: boolean;
    onToggleVisibility: () => void;
    drawioUi: "min" | "sketch";
    onToggleDrawioUi: () => void;
}
export default function ChatPanel({
    isVisible,
    onToggleVisibility,
    drawioUi,
    onToggleDrawioUi,
}: ChatPanelProps) {
    const {
        loadDiagram: onDisplayChart,
@@ -60,8 +74,23 @@ export default function ChatPanel({
const [files, setFiles] = useState<File[]>([]);
const [showHistory, setShowHistory] = useState(false);
const [showSettingsDialog, setShowSettingsDialog] = useState(false);
const [accessCodeRequired, setAccessCodeRequired] = useState(false);
const [input, setInput] = useState("");
// Check if access code is required on mount
useEffect(() => {
fetch("/api/config")
.then((res) => res.json())
.then((data) => setAccessCodeRequired(data.accessCodeRequired))
.catch(() => setAccessCodeRequired(false));
}, []);
// Generate a unique session ID for Langfuse tracing
const [sessionId, setSessionId] = useState(
() => `session-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
);
// Store XML snapshots for each user message (keyed by message index)
const xmlSnapshotsRef = useRef<Map<number, string>>(new Map());
@@ -109,12 +138,20 @@ export default function ChatPanel({
const cachedXML = chartXMLRef.current;
if (cachedXML) {
currentXml = cachedXML;
console.log("[edit_diagram] Using cached chartXML, length:", currentXml.length);
} else {
// Fallback to export only if no cached XML
console.log("[edit_diagram] No cached XML, fetching from DrawIO...");
currentXml = await onFetchChart(false);
console.log("[edit_diagram] Got XML from export, length:", currentXml.length);
}
const { replaceXMLParts } = await import("@/lib/utils");
@@ -152,7 +189,27 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
}
},
onError: (error) => {
-console.error("Chat error:", error);
// Silence access code error in console since it's handled by UI
if (!error.message.includes("Invalid or missing access code")) {
console.error("Chat error:", error);
}
// Add system message for error so it can be cleared
setMessages((currentMessages) => {
const errorMessage = {
id: `error-${Date.now()}`,
role: "system" as const,
content: error.message,
parts: [{ type: "text" as const, text: error.message }],
};
return [...currentMessages, errorMessage];
});
if (error.message.includes("Invalid or missing access code")) {
// Show settings button and open dialog to help user fix it
setAccessCodeRequired(true);
setShowSettingsDialog(true);
}
},
});
@@ -164,7 +221,6 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
}
}, [messages]);
const onFormSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
e.preventDefault();
const isProcessing = status === "streaming" || status === "submitted";
@@ -200,11 +256,17 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
const messageIndex = messages.length;
xmlSnapshotsRef.current.set(messageIndex, chartXml);
const accessCode =
localStorage.getItem(STORAGE_ACCESS_CODE_KEY) || "";
sendMessage(
{ parts },
{
body: {
xml: chartXml,
sessionId,
},
headers: {
"x-access-code": accessCode,
},
}
);
@@ -233,7 +295,10 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
// Find the user message before this assistant message
let userMessageIndex = messageIndex - 1;
while (userMessageIndex >= 0 && messages[userMessageIndex].role !== "user") {
userMessageIndex--;
}
@@ -249,7 +314,10 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
// Get the saved XML snapshot for this user message
const savedXml = xmlSnapshotsRef.current.get(userMessageIndex);
if (!savedXml) {
console.error("No saved XML snapshot for message index:", userMessageIndex);
return;
}
@@ -279,6 +347,7 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
{
body: {
xml: savedXml,
sessionId,
},
}
);
@@ -294,7 +363,10 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
// Get the saved XML snapshot for this user message
const savedXml = xmlSnapshotsRef.current.get(messageIndex);
if (!savedXml) {
console.error("No saved XML snapshot for message index:", messageIndex);
return;
}
@@ -332,6 +404,7 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
            {
                body: {
                    xml: savedXml,
                    sessionId,
                },
            }
        );
@@ -365,7 +438,12 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
    // Full view
    return (
        <div className="h-full flex flex-col bg-card shadow-soft animate-slide-in-right rounded-xl border border-border/30 relative">
            <Toaster
                position="bottom-center"
                richColors
                style={{ position: "absolute" }}
            />
            {/* Header */}
            <header className="px-5 py-4 border-b border-border/50">
                <div className="flex items-center justify-between">
@@ -388,6 +466,14 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
                    >
                        About
                    </Link>
                    <ButtonWithTooltip
                        tooltipContent="Recent generation failures were caused by our AI provider's infrastructure issue, not the app code. After extensive debugging, I've switched providers and observed 6 hours of stability. If issues persist, please report on GitHub."
                        variant="ghost"
                        size="icon"
                        className="h-6 w-6 text-green-500 hover:text-green-600"
                    >
                        <CheckCircle className="h-4 w-4" />
                    </ButtonWithTooltip>
                </div>
                <div className="flex items-center gap-1">
                    <a
@@ -398,6 +484,17 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
                    >
                        <FaGithub className="w-5 h-5" />
                    </a>
                    {accessCodeRequired && (
                        <ButtonWithTooltip
                            tooltipContent="Settings"
                            variant="ghost"
                            size="icon"
                            onClick={() => setShowSettingsDialog(true)}
                            className="hover:bg-accent"
                        >
                            <Settings className="h-5 w-5 text-muted-foreground" />
                        </ButtonWithTooltip>
                    )}
                    <ButtonWithTooltip
                        tooltipContent="Hide chat panel (Ctrl+B)"
                        variant="ghost"
@@ -412,12 +509,12 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
            </header>

            {/* Messages */}
            <main className="flex-1 w-full overflow-hidden">
                <ChatMessageDisplay
                    messages={messages}
                    setInput={setInput}
                    setFiles={handleFileChange}
                    sessionId={sessionId}
                    onRegenerate={handleRegenerate}
                    onEditMessage={handleEditMessage}
                />
@@ -433,14 +530,28 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
                    onClearChat={() => {
                        setMessages([]);
                        clearDiagram();
                        setSessionId(
                            `session-${Date.now()}-${Math.random()
                                .toString(36)
                                .slice(2, 9)}`
                        );
                        xmlSnapshotsRef.current.clear();
                    }}
                    files={files}
                    onFileChange={handleFileChange}
                    showHistory={showHistory}
                    onToggleHistory={setShowHistory}
                    sessionId={sessionId}
                    error={error}
                    drawioUi={drawioUi}
                    onToggleDrawioUi={onToggleDrawioUi}
                />
            </footer>
            <SettingsDialog
                open={showSettingsDialog}
                onOpenChange={setShowSettingsDialog}
            />
        </div>
    );
}


@@ -0,0 +1,39 @@
"use client";
import React from "react";
interface ErrorToastProps {
message: React.ReactNode;
onDismiss: () => void;
}
export function ErrorToast({ message, onDismiss }: ErrorToastProps) {
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === "Enter" || e.key === " " || e.key === "Escape") {
e.preventDefault();
onDismiss();
}
};
return (
<div
role="alert"
aria-live="polite"
tabIndex={0}
onClick={onDismiss}
onKeyDown={handleKeyDown}
className="flex items-center gap-3 bg-card border border-border/50 px-4 py-3 rounded-xl shadow-sm cursor-pointer hover:bg-muted/50 focus:outline-none focus:ring-2 focus:ring-primary/50 transition-colors"
>
<div className="flex items-center justify-center w-8 h-8 rounded-full bg-destructive/10 flex-shrink-0">
<svg className="w-4 h-4 text-destructive" viewBox="0 0 20 20" fill="currentColor" aria-hidden="true">
<path
fillRule="evenodd"
d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z"
clipRule="evenodd"
/>
</svg>
</div>
<span className="text-sm text-foreground">{message}</span>
</div>
);
}
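
For orientation only, here is a sketch (not part of this diff) of how `ErrorToast` could be driven through the sonner `Toaster` rendered in the chat panel above. The `showValidationErrors` helper and the import path are hypothetical; the real call sites may differ:

```tsx
import { toast } from "sonner";
import { ErrorToast } from "@/components/error-toast"; // hypothetical path

// Batch several upload-validation errors into a single dismissible toast.
export function showValidationErrors(errors: string[]) {
    toast.custom((t) => (
        <ErrorToast
            message={errors.join(" ")}
            onDismiss={() => toast.dismiss(t)}
        />
    ));
}
```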


@@ -0,0 +1,83 @@
"use client";
import { useState, useEffect } from "react";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";
import {
Dialog,
DialogContent,
DialogHeader,
DialogTitle,
DialogFooter,
DialogDescription,
} from "@/components/ui/dialog";
interface SettingsDialogProps {
open: boolean;
onOpenChange: (open: boolean) => void;
}
export const STORAGE_ACCESS_CODE_KEY = "next-ai-draw-io-access-code";
export function SettingsDialog({
open,
onOpenChange,
}: SettingsDialogProps) {
const [accessCode, setAccessCode] = useState("");
useEffect(() => {
if (open) {
const storedCode = localStorage.getItem(STORAGE_ACCESS_CODE_KEY) || "";
setAccessCode(storedCode);
}
}, [open]);
const handleSave = () => {
localStorage.setItem(STORAGE_ACCESS_CODE_KEY, accessCode.trim());
onOpenChange(false);
};
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === "Enter") {
e.preventDefault();
handleSave();
}
};
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="sm:max-w-md">
<DialogHeader>
<DialogTitle>Settings</DialogTitle>
<DialogDescription>
Configure your access settings.
</DialogDescription>
</DialogHeader>
<div className="space-y-4 py-2">
<div className="space-y-2">
<label className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70">
Access Code
</label>
<Input
type="password"
value={accessCode}
onChange={(e) => setAccessCode(e.target.value)}
onKeyDown={handleKeyDown}
placeholder="Enter access code"
autoComplete="off"
/>
<p className="text-[0.8rem] text-muted-foreground">
Required if the server has enabled access control.
</p>
</div>
</div>
<DialogFooter>
<Button variant="outline" onClick={() => onOpenChange(false)}>
Cancel
</Button>
<Button onClick={handleSave}>Save</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}


@@ -0,0 +1,56 @@
"use client"
import * as React from "react"
import { GripVerticalIcon } from "lucide-react"
import * as ResizablePrimitive from "react-resizable-panels"
import { cn } from "@/lib/utils"
function ResizablePanelGroup({
className,
...props
}: React.ComponentProps<typeof ResizablePrimitive.PanelGroup>) {
return (
<ResizablePrimitive.PanelGroup
data-slot="resizable-panel-group"
className={cn(
"flex h-full w-full data-[panel-group-direction=vertical]:flex-col",
className
)}
{...props}
/>
)
}
function ResizablePanel({
...props
}: React.ComponentProps<typeof ResizablePrimitive.Panel>) {
return <ResizablePrimitive.Panel data-slot="resizable-panel" {...props} />
}
function ResizableHandle({
withHandle,
className,
...props
}: React.ComponentProps<typeof ResizablePrimitive.PanelResizeHandle> & {
withHandle?: boolean
}) {
return (
<ResizablePrimitive.PanelResizeHandle
data-slot="resizable-handle"
className={cn(
"bg-border focus-visible:ring-ring relative flex w-px items-center justify-center after:absolute after:inset-y-0 after:left-1/2 after:w-1 after:-translate-x-1/2 focus-visible:ring-1 focus-visible:ring-offset-1 focus-visible:outline-hidden data-[panel-group-direction=vertical]:h-px data-[panel-group-direction=vertical]:w-full data-[panel-group-direction=vertical]:after:left-0 data-[panel-group-direction=vertical]:after:h-1 data-[panel-group-direction=vertical]:after:w-full data-[panel-group-direction=vertical]:after:translate-x-0 data-[panel-group-direction=vertical]:after:-translate-y-1/2 [&[data-panel-group-direction=vertical]>div]:rotate-90",
className
)}
{...props}
>
{withHandle && (
<div className="bg-border z-10 flex h-4 w-3 items-center justify-center rounded-xs border">
<GripVerticalIcon className="size-2.5" />
</div>
)}
</ResizablePrimitive.PanelResizeHandle>
)
}
export { ResizablePanelGroup, ResizablePanel, ResizableHandle }
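
For orientation, a sketch of how the page layout might compose these wrappers for the resizable chat panel. The 15%/50% bounds come from the PR description; `Workspace`, the default size, and the child placeholders are hypothetical:

```tsx
import {
    ResizablePanelGroup,
    ResizablePanel,
    ResizableHandle,
} from "@/components/ui/resizable"; // hypothetical path

// Hypothetical layout: collapsible chat panel next to the draw.io canvas.
export function Workspace() {
    return (
        <ResizablePanelGroup direction="horizontal">
            {/* defaultSize is illustrative; min/max mirror the PR description */}
            <ResizablePanel defaultSize={30} minSize={15} maxSize={50} collapsible>
                {/* chat panel */}
            </ResizablePanel>
            <ResizableHandle withHandle />
            <ResizablePanel>
                {/* draw.io editor */}
            </ResizablePanel>
        </ResizablePanelGroup>
    );
}
```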


@@ -18,7 +18,7 @@ function ScrollArea({
        >
            <ScrollAreaPrimitive.Viewport
                data-slot="scroll-area-viewport"
                className="ring-ring/10 dark:ring-ring/20 dark:outline-ring/40 outline-ring/50 size-full rounded-[inherit] transition-[color,box-shadow] focus-visible:ring-4 focus-visible:outline-1 !overflow-x-hidden"
            >
                {children}
            </ScrollAreaPrimitive.Viewport>


@@ -16,7 +16,7 @@ interface DiagramContextType {
    drawioRef: React.Ref<DrawIoEmbedRef | null>;
    handleDiagramExport: (data: any) => void;
    clearDiagram: () => void;
    saveDiagramToFile: (filename: string, format: ExportFormat, sessionId?: string) => void;
}

const DiagramContext = createContext<DiagramContextType | undefined>(undefined);
@@ -107,7 +107,7 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
        setDiagramHistory([]);
    };

    const saveDiagramToFile = (filename: string, format: ExportFormat, sessionId?: string) => {
        if (!drawioRef.current) {
            console.warn("Draw.io editor not ready");
            return;
@@ -145,6 +145,9 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
extension = ".svg"; extension = ".svg";
} }
// Log save event to Langfuse (flags the trace)
logSaveToLangfuse(filename, format, sessionId);
// Handle download // Handle download
let url: string; let url: string;
if (typeof fileContent === "string" && fileContent.startsWith("data:")) { if (typeof fileContent === "string" && fileContent.startsWith("data:")) {
@@ -174,6 +177,19 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
        drawioRef.current.exportDiagram({ format: drawioFormat });
    };

    // Log save event to Langfuse (just flags the trace, doesn't send content)
    const logSaveToLangfuse = async (filename: string, format: string, sessionId?: string) => {
        try {
            await fetch("/api/log-save", {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify({ filename, format, sessionId }),
            });
        } catch (error) {
            console.warn("Failed to log save to Langfuse:", error);
        }
    };

    return (
        <DiagramContext.Provider
            value={{

docs/ai-providers.md (new file, 141 lines)

@@ -0,0 +1,141 @@
# AI Provider Configuration
This guide explains how to configure different AI model providers for next-ai-draw-io.
## Quick Start
1. Copy `.env.example` to `.env.local`
2. Set your API key for your chosen provider
3. Set `AI_MODEL` to your desired model
4. Run `npm run dev`
## Supported Providers
### Google Gemini
```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
```
Optional custom endpoint:
```bash
GOOGLE_BASE_URL=https://your-custom-endpoint
```
### OpenAI
```bash
OPENAI_API_KEY=your_api_key
AI_MODEL=gpt-4o
```
Optional custom endpoint (for OpenAI-compatible services):
```bash
OPENAI_BASE_URL=https://your-custom-endpoint/v1
```
### Anthropic
```bash
ANTHROPIC_API_KEY=your_api_key
AI_MODEL=claude-sonnet-4-5-20250929
```
Optional custom endpoint:
```bash
ANTHROPIC_BASE_URL=https://your-custom-endpoint
```
### DeepSeek
```bash
DEEPSEEK_API_KEY=your_api_key
AI_MODEL=deepseek-chat
```
Optional custom endpoint:
```bash
DEEPSEEK_BASE_URL=https://your-custom-endpoint
```
### Azure OpenAI
```bash
AZURE_API_KEY=your_api_key
AI_MODEL=your-deployment-name
```
Optional custom endpoint:
```bash
AZURE_BASE_URL=https://your-resource.openai.azure.com
```
### AWS Bedrock
```bash
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250929-v1:0
```
Note: On AWS (Amplify, Lambda, EC2 with IAM role), credentials are automatically obtained from the IAM role.
### OpenRouter
```bash
OPENROUTER_API_KEY=your_api_key
AI_MODEL=anthropic/claude-sonnet-4
```
Optional custom endpoint:
```bash
OPENROUTER_BASE_URL=https://your-custom-endpoint
```
### Ollama (Local)
```bash
AI_PROVIDER=ollama
AI_MODEL=llama3.2
```
Optional custom URL:
```bash
OLLAMA_BASE_URL=http://localhost:11434
```
## Auto-Detection
If you only configure **one** provider's API key, the system will automatically detect and use that provider. No need to set `AI_PROVIDER`.
If you configure **multiple** API keys, you must explicitly set `AI_PROVIDER`:
```bash
AI_PROVIDER=google # or: openai, anthropic, deepseek, azure, bedrock, openrouter, ollama
```
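Conceptually, auto-detection boils down to counting which provider keys are present. The snippet below is a simplified illustration, not the project's actual code; the real implementation also covers Bedrock and Ollama, which don't require an API key:

```ts
// Simplified illustration of provider auto-detection (not the actual implementation).
const PROVIDER_KEYS: Record<string, string | undefined> = {
    google: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    deepseek: process.env.DEEPSEEK_API_KEY,
    azure: process.env.AZURE_API_KEY,
    openrouter: process.env.OPENROUTER_API_KEY,
};

function detectProvider(): string {
    // An explicit AI_PROVIDER always wins.
    if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER;

    const configured = Object.entries(PROVIDER_KEYS)
        .filter(([, key]) => !!key)
        .map(([name]) => name);

    if (configured.length === 1) return configured[0];
    throw new Error("Zero or multiple providers configured - set AI_PROVIDER explicitly.");
}
```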
## Model Capability Requirements
This task requires exceptionally strong model capabilities, as it involves generating long-form text with strict formatting constraints (draw.io XML).
**Recommended models**:
- Claude Sonnet 4.5 / Opus 4.5
**Note on Ollama**: While Ollama is supported as a provider, it's generally not practical for this use case unless you're running high-capability models like DeepSeek R1 or Qwen3-235B locally.
## Recommendations
- **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
- **Budget-friendly**: DeepSeek offers competitive pricing
- **Privacy**: Use Ollama for fully local, offline operation (requires powerful hardware)
- **Flexibility**: OpenRouter provides access to many models through a single API


@@ -41,3 +41,12 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
# DeepSeek Configuration
# DEEPSEEK_API_KEY=sk-...
# DEEPSEEK_BASE_URL=https://api.deepseek.com/v1 # Optional: Custom endpoint
# Langfuse Observability (Optional)
# Enable LLM tracing and analytics - https://langfuse.com
# LANGFUSE_PUBLIC_KEY=pk-lf-...
# LANGFUSE_SECRET_KEY=sk-lf-...
# LANGFUSE_BASEURL=https://cloud.langfuse.com # EU region, use https://us.cloud.langfuse.com for US
# Access Control (Optional)
# ACCESS_CODE_LIST=your-secret-code,another-code
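
For reference, a minimal sketch of how a route could enforce this access control, assuming the `x-access-code` header sent by the chat panel above. The helper name and exact logic are illustrative, not the actual /api/chat implementation:

```ts
// Illustrative helper: check the x-access-code header against ACCESS_CODE_LIST.
export function isAccessAllowed(req: Request): boolean {
    const allowed = (process.env.ACCESS_CODE_LIST ?? "")
        .split(",")
        .map((code) => code.trim())
        .filter(Boolean);

    // No codes configured means access control is disabled.
    if (allowed.length === 0) return true;

    return allowed.includes(req.headers.get("x-access-code") ?? "");
}
```

A route using such a check could respond with a 401 carrying the "Invalid or missing access code" message that the chat panel's onError handler looks for.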

instrumentation.ts (new file, 35 lines)

@@ -0,0 +1,35 @@
import { LangfuseSpanProcessor } from '@langfuse/otel';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
export function register() {
// Skip telemetry if Langfuse env vars are not configured
if (!process.env.LANGFUSE_PUBLIC_KEY || !process.env.LANGFUSE_SECRET_KEY) {
console.warn('[Langfuse] Environment variables not configured - telemetry disabled');
return;
}
const langfuseSpanProcessor = new LangfuseSpanProcessor({
publicKey: process.env.LANGFUSE_PUBLIC_KEY,
secretKey: process.env.LANGFUSE_SECRET_KEY,
baseUrl: process.env.LANGFUSE_BASEURL,
// Filter out Next.js HTTP request spans so AI SDK spans become root traces
shouldExportSpan: ({ otelSpan }) => {
const spanName = otelSpan.name;
// Skip Next.js HTTP infrastructure spans
if (spanName.startsWith('POST /') ||
spanName.startsWith('GET /') ||
spanName.includes('BaseServer') ||
spanName.includes('handleRequest')) {
return false;
}
return true;
},
});
const tracerProvider = new NodeTracerProvider({
spanProcessors: [langfuseSpanProcessor],
});
// Register globally so AI SDK's telemetry also uses this processor
tracerProvider.register();
}


@@ -1,4 +1,5 @@
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';
import { openai, createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { google, createGoogleGenerativeAI } from '@ai-sdk/google';
@@ -38,7 +39,7 @@ const ANTHROPIC_BETA_HEADERS = {
// Map of provider to required environment variable
const PROVIDER_ENV_VARS: Record<ProviderName, string | null> = {
    bedrock: null, // AWS SDK auto-uses IAM role on AWS, or env vars locally
    openai: 'OPENAI_API_KEY',
    anthropic: 'ANTHROPIC_API_KEY',
    google: 'GOOGLE_GENERATIVE_AI_API_KEY',
@@ -159,13 +160,20 @@ export function getAIModel(): ModelConfig {
    let headers: Record<string, string> | undefined = undefined;

    switch (provider) {
        case 'bedrock': {
            // Use credential provider chain for IAM role support (Amplify, Lambda, etc.)
            // Falls back to env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for local dev
            const bedrockProvider = createAmazonBedrock({
                region: process.env.AWS_REGION || 'us-west-2',
                credentialProvider: fromNodeProviderChain(),
            });
            model = bedrockProvider(modelId);

            // Add Anthropic beta options if using Claude models via Bedrock
            if (modelId.includes('anthropic.claude')) {
                providerOptions = BEDROCK_ANTHROPIC_BETA;
            }
            break;
        }
        case 'openai':
            if (process.env.OPENAI_BASE_URL) {

lib/langfuse.ts (new file, 95 lines)

@@ -0,0 +1,95 @@
import { observe, updateActiveTrace } from '@langfuse/tracing';
import { LangfuseClient } from '@langfuse/client';
import * as api from '@opentelemetry/api';
// Singleton LangfuseClient instance for direct API calls
let langfuseClient: LangfuseClient | null = null;
export function getLangfuseClient(): LangfuseClient | null {
if (!process.env.LANGFUSE_PUBLIC_KEY || !process.env.LANGFUSE_SECRET_KEY) {
return null;
}
if (!langfuseClient) {
langfuseClient = new LangfuseClient({
publicKey: process.env.LANGFUSE_PUBLIC_KEY,
secretKey: process.env.LANGFUSE_SECRET_KEY,
baseUrl: process.env.LANGFUSE_BASEURL,
});
}
return langfuseClient;
}
// Check if Langfuse is configured
export function isLangfuseEnabled(): boolean {
return !!process.env.LANGFUSE_PUBLIC_KEY;
}
// Update trace with input data at the start of request
export function setTraceInput(params: {
input: string;
sessionId?: string;
userId?: string;
}) {
if (!isLangfuseEnabled()) return;
updateActiveTrace({
name: 'chat',
input: params.input,
sessionId: params.sessionId,
userId: params.userId,
});
}
// Update trace with output and end the span
export function setTraceOutput(output: string, usage?: { promptTokens?: number; completionTokens?: number }) {
if (!isLangfuseEnabled()) return;
updateActiveTrace({ output });
const activeSpan = api.trace.getActiveSpan();
if (activeSpan) {
// Manually set usage attributes since AI SDK Bedrock streaming doesn't provide them
if (usage?.promptTokens) {
activeSpan.setAttribute('ai.usage.promptTokens', usage.promptTokens);
activeSpan.setAttribute('gen_ai.usage.input_tokens', usage.promptTokens);
}
if (usage?.completionTokens) {
activeSpan.setAttribute('ai.usage.completionTokens', usage.completionTokens);
activeSpan.setAttribute('gen_ai.usage.output_tokens', usage.completionTokens);
}
activeSpan.end();
}
}
// Get telemetry config for streamText
export function getTelemetryConfig(params: {
sessionId?: string;
userId?: string;
}) {
if (!isLangfuseEnabled()) return undefined;
return {
isEnabled: true,
// Disable automatic input recording to avoid uploading large base64 images to Langfuse media
// User text input is recorded manually via setTraceInput
recordInputs: false,
recordOutputs: true,
metadata: {
sessionId: params.sessionId,
userId: params.userId,
},
};
}
// Wrap a handler with Langfuse observe
export function wrapWithObserve<T>(
handler: (req: Request) => Promise<T>
): (req: Request) => Promise<T> {
if (!isLangfuseEnabled()) {
return handler;
}
return observe(handler, { name: 'chat', endOnExit: false });
}
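
To show how these helpers are meant to compose, here is a hedged sketch of a chat route wiring them around the AI SDK's `streamText`. The import paths, the shape returned by `getAIModel()`, and the `usage` field names vary by AI SDK version, so treat this as an illustration rather than the actual app/api/chat/route.ts:

```ts
import { streamText, convertToModelMessages } from "ai";
import {
    wrapWithObserve,
    setTraceInput,
    setTraceOutput,
    getTelemetryConfig,
} from "@/lib/langfuse";
import { getAIModel } from "@/lib/ai-model"; // hypothetical path

export const POST = wrapWithObserve(async (req: Request) => {
    const { messages, sessionId } = await req.json();
    const { model, providerOptions } = getAIModel(); // assumed ModelConfig shape

    // Record the latest user input manually (image parts are not uploaded).
    setTraceInput({ input: JSON.stringify(messages.at(-1) ?? ""), sessionId });

    const result = streamText({
        model,
        providerOptions,
        messages: convertToModelMessages(messages),
        experimental_telemetry: getTelemetryConfig({ sessionId }),
        onFinish: ({ text, usage }) => {
            // Token field names differ across AI SDK versions; mapped manually here.
            setTraceOutput(text, {
                promptTokens: usage?.inputTokens,
                completionTokens: usage?.outputTokens,
            });
        },
    });

    return result.toUIMessageStreamResponse();
});
```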

package-lock.json (generated, 2077 lines changed; diff too large to display)

@@ -17,8 +17,13 @@
"@ai-sdk/google": "^2.0.0", "@ai-sdk/google": "^2.0.0",
"@ai-sdk/openai": "^2.0.19", "@ai-sdk/openai": "^2.0.19",
"@ai-sdk/react": "^2.0.22", "@ai-sdk/react": "^2.0.22",
"@aws-sdk/credential-providers": "^3.943.0",
"@langfuse/client": "^4.4.9",
"@langfuse/otel": "^4.4.4",
"@langfuse/tracing": "^4.4.9",
"@next/third-parties": "^16.0.6", "@next/third-parties": "^16.0.6",
"@openrouter/ai-sdk-provider": "^1.2.3", "@openrouter/ai-sdk-provider": "^1.2.3",
"@opentelemetry/sdk-trace-node": "^2.2.0",
"@radix-ui/react-dialog": "^1.1.6", "@radix-ui/react-dialog": "^1.1.6",
"@radix-ui/react-scroll-area": "^1.2.3", "@radix-ui/react-scroll-area": "^1.2.3",
"@radix-ui/react-select": "^2.2.6", "@radix-ui/react-select": "^2.2.6",
@@ -40,13 +45,17 @@
"react-dom": "^19.0.0", "react-dom": "^19.0.0",
"react-drawio": "^1.0.3", "react-drawio": "^1.0.3",
"react-icons": "^5.5.0", "react-icons": "^5.5.0",
"react-markdown": "^10.1.0",
"react-resizable-panels": "^3.0.6",
"remark-gfm": "^4.0.1", "remark-gfm": "^4.0.1",
"sonner": "^2.0.7",
"tailwind-merge": "^3.0.2", "tailwind-merge": "^3.0.2",
"tailwindcss-animate": "^1.0.7", "tailwindcss-animate": "^1.0.7",
"zod": "^4.1.12" "zod": "^4.1.12"
}, },
"devDependencies": { "devDependencies": {
"@tailwindcss/postcss": "^4", "@tailwindcss/postcss": "^4",
"@tailwindcss/typography": "^0.5.19",
"@types/node": "^20", "@types/node": "^20",
"@types/pako": "^2.0.3", "@types/pako": "^2.0.3",
"@types/react": "^19", "@types/react": "^19",

public/favicon.ico (new binary file, 15 KiB; not shown)