Mirror of https://github.com/DayuanJiang/next-ai-draw-io.git (synced 2026-01-02 22:32:27 +08:00)

Compare commits: 1 commit, fix/shorte...cloudflare
Commit: 0dd7b2383e

9 .gitignore (vendored)
@@ -40,9 +40,10 @@ yarn-error.log*
*.tsbuildinfo
next-env.d.ts
push-via-ec2.sh
.claude/
.claude/settings.local.json
.playwright-mcp/

# Cloudflare
.dev.vars

# cloudflare
.open-next/
.wrangler/
.dev.vars
.wrangler/
@@ -22,10 +22,6 @@ COPY . .

# Disable Next.js telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1

# Build-time argument for self-hosted draw.io URL
ARG NEXT_PUBLIC_DRAWIO_BASE_URL=https://embed.diagrams.net
ENV NEXT_PUBLIC_DRAWIO_BASE_URL=${NEXT_PUBLIC_DRAWIO_BASE_URL}

# Build Next.js application (standalone mode)
RUN npm run build
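The `NEXT_PUBLIC_DRAWIO_BASE_URL` build argument is inlined into the client bundle at build time, which is why it is passed as an `ARG`/`ENV` pair here rather than only at runtime. A minimal sketch of how such a variable is typically read on the client; the helper and the embed query string are illustrative assumptions, not code from this diff:

```typescript
// Illustrative only: NEXT_PUBLIC_ variables are replaced at build time by Next.js.
const DRAWIO_BASE_URL =
    process.env.NEXT_PUBLIC_DRAWIO_BASE_URL ?? "https://embed.diagrams.net"

// Point the embedded editor at a self-hosted draw.io instance when configured.
export function drawioEmbedUrl(): string {
    return `${DRAWIO_BASE_URL}/?embed=1&proto=json`
}
```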
132 README.md

@@ -4,44 +4,31 @@

**AI-Powered Diagram Creation Tool - Chat, Draw, Visualize**

English | [中文](./docs/README_CN.md) | [日本語](./docs/README_JA.md)
English | [中文](./README_CN.md) | [日本語](./README_JA.md)

[](https://next-ai-drawio.jiang.jp/)

[](https://opensource.org/licenses/Apache-2.0)
[](https://nextjs.org/)
[](https://react.dev/)
[](https://opensource.org/licenses/MIT)
[](https://nextjs.org/)
[](https://www.typescriptlang.org/)
[](https://github.com/sponsors/DayuanJiang)

[](https://next-ai-drawio.jiang.jp/)
[🚀 Live Demo](https://next-ai-drawio.jiang.jp/)

</div>

A Next.js web application that integrates AI capabilities with draw.io diagrams. Create, modify, and enhance diagrams through natural language commands and AI-assisted visualization.

https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979

## Features

https://github.com/user-attachments/assets/9d60a3e8-4a1c-4b5e-acbb-26af2d3eabd1

- **LLM-Powered Diagram Creation**: Leverage Large Language Models to create and manipulate draw.io diagrams directly through natural language commands
- **Image-Based Diagram Replication**: Upload existing diagrams or images and have the AI replicate and enhance them automatically
- **Diagram History**: Comprehensive version control that tracks all changes, allowing you to view and restore previous versions of your diagrams from before the AI's edits.
- **Interactive Chat Interface**: Communicate with AI to refine your diagrams in real-time
- **AWS Architecture Diagram Support**: Specialized support for generating AWS architecture diagrams
- **Animated Connectors**: Create dynamic and animated connectors between diagram elements for better visualization
## Table of Contents

- [Next AI Draw.io ](#next-ai-drawio-)
- [Table of Contents](#table-of-contents)
- [Examples](#examples)
- [Features](#features)
- [Getting Started](#getting-started)
- [Try it Online](#try-it-online)
- [Run with Docker (Recommended)](#run-with-docker-recommended)
- [Installation](#installation)
- [Deployment](#deployment)
- [Multi-Provider Support](#multi-provider-support)
- [How It Works](#how-it-works)
- [Project Structure](#project-structure)
- [Support \& Contact](#support--contact)
- [Star History](#star-history)

## Examples
## **Examples**

Here are some example prompts and their generated diagrams:

@@ -81,29 +68,38 @@ Here are some example prompts and their generated diagrams:
</table>
</div>

## Features
## How It Works

- **LLM-Powered Diagram Creation**: Leverage Large Language Models to create and manipulate draw.io diagrams directly through natural language commands
- **Image-Based Diagram Replication**: Upload existing diagrams or images and have the AI replicate and enhance them automatically
- **PDF & Text File Upload**: Upload PDF documents and text files to extract content and generate diagrams from existing documents
- **AI Reasoning Display**: View the AI's thinking process for supported models (OpenAI o1/o3, Gemini, Claude, etc.)
- **Diagram History**: Comprehensive version control that tracks all changes, allowing you to view and restore previous versions of your diagrams from before the AI's edits.
- **Interactive Chat Interface**: Communicate with AI to refine your diagrams in real-time
- **Cloud Architecture Diagram Support**: Specialized support for generating cloud architecture diagrams (AWS, GCP, Azure)
- **Animated Connectors**: Create dynamic and animated connectors between diagram elements for better visualization

The application uses the following technologies:

- **Next.js**: For the frontend framework and routing
- **Vercel AI SDK** (`ai` + `@ai-sdk/*`): For streaming AI responses and multi-provider support
- **react-drawio**: For diagram representation and manipulation

Diagrams are represented as XML that can be rendered in draw.io. The AI processes your commands and generates or modifies this XML accordingly.
## Multi-Provider Support

- AWS Bedrock (default)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI
- Ollama
- OpenRouter
- DeepSeek
- SiliconFlow

All providers except AWS Bedrock and OpenRouter support custom endpoints.

📖 **[Detailed Provider Configuration Guide](./docs/ai-providers.md)** - See setup instructions for each provider.

**Model Requirements**: This task requires strong model capabilities for generating long-form text with strict formatting constraints (draw.io XML). Recommended models include Claude Sonnet 4.5, GPT-4o, Gemini 2.0, and DeepSeek V3/R1.

Note that `claude-sonnet-4-5` has been trained on draw.io diagrams with AWS logos, so if you want to create AWS architecture diagrams, it is the best choice.

## Getting Started

### Try it Online

No installation needed! Try the app directly on our demo site:

[](https://next-ai-drawio.jiang.jp/)

> Note: Due to high traffic, the demo site currently uses minimax-m2. For best results, we recommend self-hosting with Claude Sonnet 4.5 or Claude Opus 4.5.

> **Bring Your Own API Key**: You can use your own API key to bypass usage limits on the demo site. Click the Settings icon in the chat panel to configure your provider and API key. Your key is stored locally in your browser and is never stored on the server.

### Run with Docker (Recommended)

If you just want to run it locally, the best way is to use Docker.

@@ -120,7 +116,7 @@ docker run -d -p 3000:3000 \
    ghcr.io/dayuanjiang/next-ai-draw-io:latest
```

Or use an env file:
Or use an env file (create one from `env.example`):

```bash
cp env.example .env
@@ -132,8 +128,6 @@ Open [http://localhost:3000](http://localhost:3000) in your browser.

Replace the environment variables with your preferred AI provider configuration. See [Multi-Provider Support](#multi-provider-support) for available options.

> **Offline Deployment:** If `embed.diagrams.net` is blocked, see [Offline Deployment](./docs/offline-deployment.md) for configuration options.

### Installation

1. Clone the repository:
@@ -188,38 +182,6 @@ Or you can deploy by this button.
Be sure to **set the environment variables** in the Vercel dashboard as you did in your local `.env.local` file.

## Multi-Provider Support

- AWS Bedrock (default)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI
- Ollama
- OpenRouter
- DeepSeek
- SiliconFlow

All providers except AWS Bedrock and OpenRouter support custom endpoints.

📖 **[Detailed Provider Configuration Guide](./docs/ai-providers.md)** - See setup instructions for each provider.

**Model Requirements**: This task requires strong model capabilities for generating long-form text with strict formatting constraints (draw.io XML). Recommended models include Claude Sonnet 4.5, GPT-5.1, Gemini 3 Pro, and DeepSeek V3.2/R1.

Note that the `claude` series has been trained on draw.io diagrams with cloud architecture logos such as AWS, Azure, and GCP, so if you want to create cloud architecture diagrams, it is the best choice.

## How It Works

The application uses the following technologies:

- **Next.js**: For the frontend framework and routing
- **Vercel AI SDK** (`ai` + `@ai-sdk/*`): For streaming AI responses and multi-provider support
- **react-drawio**: For diagram representation and manipulation

Diagrams are represented as XML that can be rendered in draw.io. The AI processes your commands and generates or modifies this XML accordingly.

## Project Structure

```
@@ -239,6 +201,14 @@ lib/          # Utility functions and helpers
public/       # Static assets including example images
```

## TODOs

- [x] Allow the LLM to modify the XML instead of generating it from scratch every time.
- [x] Improve the smoothness of shape streaming updates.
- [x] Add multiple AI provider support (OpenAI, Anthropic, Google, Azure, Ollama)
- [x] Fix the bug where generation fails for sessions longer than 60 seconds.
- [ ] Add API config on the UI.

## Support & Contact

If you find this project useful, please consider [sponsoring](https://github.com/sponsors/DayuanJiang) to help me host the live demo site!
@@ -4,16 +4,14 @@
|
||||
|
||||
**AI驱动的图表创建工具 - 对话、绘制、可视化**
|
||||
|
||||
[English](../README.md) | 中文 | [日本語](./README_JA.md)
|
||||
[English](./README.md) | 中文 | [日本語](./README_JA.md)
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
[](https://opensource.org/licenses/Apache-2.0)
|
||||
[](https://nextjs.org/)
|
||||
[](https://react.dev/)
|
||||
[](https://opensource.org/licenses/MIT)
|
||||
[](https://nextjs.org/)
|
||||
[](https://www.typescriptlang.org/)
|
||||
[](https://github.com/sponsors/DayuanJiang)
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
[🚀 在线演示](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
</div>
|
||||
|
||||
@@ -21,23 +19,16 @@
|
||||
|
||||
https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
|
||||
|
||||
## 目录
|
||||
- [Next AI Draw.io](#next-ai-drawio)
|
||||
- [目录](#目录)
|
||||
- [示例](#示例)
|
||||
- [功能特性](#功能特性)
|
||||
- [快速开始](#快速开始)
|
||||
- [在线试用](#在线试用)
|
||||
- [使用Docker运行(推荐)](#使用docker运行推荐)
|
||||
- [安装](#安装)
|
||||
- [部署](#部署)
|
||||
- [多提供商支持](#多提供商支持)
|
||||
- [工作原理](#工作原理)
|
||||
- [项目结构](#项目结构)
|
||||
- [支持与联系](#支持与联系)
|
||||
- [Star历史](#star历史)
|
||||
## 功能特性
|
||||
|
||||
## 示例
|
||||
- **LLM驱动的图表创建**:利用大语言模型通过自然语言命令直接创建和操作draw.io图表
|
||||
- **基于图像的图表复制**:上传现有图表或图像,让AI自动复制和增强
|
||||
- **图表历史记录**:全面的版本控制,跟踪所有更改,允许您查看和恢复AI编辑前的图表版本
|
||||
- **交互式聊天界面**:与AI实时对话来完善您的图表
|
||||
- **AWS架构图支持**:专门支持生成AWS架构图
|
||||
- **动画连接器**:在图表元素之间创建动态动画连接器,实现更好的可视化效果
|
||||
|
||||
## **示例**
|
||||
|
||||
以下是一些示例提示词及其生成的图表:
|
||||
|
||||
@@ -47,59 +38,68 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
|
||||
<td colspan="2" valign="top" align="center">
|
||||
<strong>动画Transformer连接器</strong><br />
|
||||
<p><strong>提示词:</strong> 给我一个带有**动画连接器**的Transformer架构图。</p>
|
||||
<img src="../public/animated_connectors.svg" alt="带动画连接器的Transformer架构" width="480" />
|
||||
<img src="./public/animated_connectors.svg" alt="带动画连接器的Transformer架构" width="480" />
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="50%" valign="top">
|
||||
<strong>GCP架构图</strong><br />
|
||||
<p><strong>提示词:</strong> 使用**GCP图标**生成一个GCP架构图。在这个图中,用户连接到托管在实例上的前端。</p>
|
||||
<img src="../public/gcp_demo.svg" alt="GCP架构图" width="480" />
|
||||
<img src="./public/gcp_demo.svg" alt="GCP架构图" width="480" />
|
||||
</td>
|
||||
<td width="50%" valign="top">
|
||||
<strong>AWS架构图</strong><br />
|
||||
<p><strong>提示词:</strong> 使用**AWS图标**生成一个AWS架构图。在这个图中,用户连接到托管在实例上的前端。</p>
|
||||
<img src="../public/aws_demo.svg" alt="AWS架构图" width="480" />
|
||||
<img src="./public/aws_demo.svg" alt="AWS架构图" width="480" />
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="50%" valign="top">
|
||||
<strong>Azure架构图</strong><br />
|
||||
<p><strong>提示词:</strong> 使用**Azure图标**生成一个Azure架构图。在这个图中,用户连接到托管在实例上的前端。</p>
|
||||
<img src="../public/azure_demo.svg" alt="Azure架构图" width="480" />
|
||||
<img src="./public/azure_demo.svg" alt="Azure架构图" width="480" />
|
||||
</td>
|
||||
<td width="50%" valign="top">
|
||||
<strong>猫咪素描</strong><br />
|
||||
<p><strong>提示词:</strong> 给我画一只可爱的猫。</p>
|
||||
<img src="../public/cat_demo.svg" alt="猫咪绘图" width="240" />
|
||||
<img src="./public/cat_demo.svg" alt="猫咪绘图" width="240" />
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
</div>
|
||||
|
||||
## 功能特性
|
||||
## 工作原理
|
||||
|
||||
- **LLM驱动的图表创建**:利用大语言模型通过自然语言命令直接创建和操作draw.io图表
|
||||
- **基于图像的图表复制**:上传现有图表或图像,让AI自动复制和增强
|
||||
- **PDF和文本文件上传**:上传PDF文档和文本文件,提取内容并从现有文档生成图表
|
||||
- **AI推理过程显示**:查看支持模型的AI思考过程(OpenAI o1/o3、Gemini、Claude等)
|
||||
- **图表历史记录**:全面的版本控制,跟踪所有更改,允许您查看和恢复AI编辑前的图表版本
|
||||
- **交互式聊天界面**:与AI实时对话来完善您的图表
|
||||
- **云架构图支持**:专门支持生成云架构图(AWS、GCP、Azure)
|
||||
- **动画连接器**:在图表元素之间创建动态动画连接器,实现更好的可视化效果
|
||||
本应用使用以下技术:
|
||||
|
||||
- **Next.js**:用于前端框架和路由
|
||||
- **Vercel AI SDK**(`ai` + `@ai-sdk/*`):用于流式AI响应和多提供商支持
|
||||
- **react-drawio**:用于图表表示和操作
|
||||
|
||||
图表以XML格式表示,可在draw.io中渲染。AI处理您的命令并相应地生成或修改此XML。
|
||||
|
||||
## 多提供商支持
|
||||
|
||||
- AWS Bedrock(默认)
|
||||
- OpenAI
|
||||
- Anthropic
|
||||
- Google AI
|
||||
- Azure OpenAI
|
||||
- Ollama
|
||||
- OpenRouter
|
||||
- DeepSeek
|
||||
- SiliconFlow
|
||||
|
||||
除AWS Bedrock和OpenRouter外,所有提供商都支持自定义端点。
|
||||
|
||||
📖 **[详细的提供商配置指南](./docs/ai-providers.md)** - 查看各提供商的设置说明。
|
||||
|
||||
**模型要求**:此任务需要强大的模型能力,因为它涉及生成具有严格格式约束的长文本(draw.io XML)。推荐使用Claude Sonnet 4.5、GPT-4o、Gemini 2.0和DeepSeek V3/R1。
|
||||
|
||||
注意:`claude-sonnet-4-5` 已在带有AWS标志的draw.io图表上进行训练,因此如果您想创建AWS架构图,这是最佳选择。
|
||||
|
||||
## 快速开始
|
||||
|
||||
### 在线试用
|
||||
|
||||
无需安装!直接在我们的演示站点试用:
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
> 注意:由于访问量较大,演示站点目前使用 minimax-m2 模型。如需获得最佳效果,建议使用 Claude Sonnet 4.5 或 Claude Opus 4.5 自行部署。
|
||||
|
||||
> **使用自己的 API Key**:您可以使用自己的 API Key 来绕过演示站点的用量限制。点击聊天面板中的设置图标即可配置您的 Provider 和 API Key。您的 Key 仅保存在浏览器本地,不会被存储在服务器上。
|
||||
|
||||
### 使用Docker运行(推荐)
|
||||
|
||||
如果您只想在本地运行,最好的方式是使用Docker。
|
||||
@@ -116,20 +116,10 @@ docker run -d -p 3000:3000 \
|
||||
ghcr.io/dayuanjiang/next-ai-draw-io:latest
|
||||
```
|
||||
|
||||
或者使用 env 文件:
|
||||
|
||||
```bash
|
||||
cp env.example .env
|
||||
# 编辑 .env 填写您的配置
|
||||
docker run -d -p 3000:3000 --env-file .env ghcr.io/dayuanjiang/next-ai-draw-io:latest
|
||||
```
|
||||
|
||||
在浏览器中打开 [http://localhost:3000](http://localhost:3000)。
|
||||
|
||||
请根据您首选的AI提供商配置替换环境变量。可用选项请参阅[多提供商支持](#多提供商支持)。
|
||||
|
||||
> **离线部署:** 如果 `embed.diagrams.net` 被屏蔽,请参阅 [离线部署指南](./offline-deployment.md) 了解配置选项。
|
||||
|
||||
### 安装
|
||||
|
||||
1. 克隆仓库:
|
||||
@@ -143,6 +133,8 @@ cd next-ai-draw-io
|
||||
|
||||
```bash
|
||||
npm install
|
||||
# 或
|
||||
yarn install
|
||||
```
|
||||
|
||||
3. 配置您的AI提供商:
|
||||
@@ -163,7 +155,7 @@ cp env.example .env.local
|
||||
|
||||
> 警告:如果不填写 `ACCESS_CODE_LIST`,则任何人都可以直接使用你部署后的网站,可能会导致你的 token 被急速消耗完毕,建议填写此选项。
|
||||
|
||||
详细设置说明请参阅[提供商配置指南](./ai-providers.md)。
|
||||
详细设置说明请参阅[提供商配置指南](./docs/ai-providers.md)。
|
||||
|
||||
4. 运行开发服务器:
|
||||
|
||||
@@ -184,38 +176,6 @@ npm run dev
|
||||
|
||||
请确保在Vercel控制台中**设置环境变量**,就像您在本地 `.env.local` 文件中所做的那样。
|
||||
|
||||
|
||||
## 多提供商支持
|
||||
|
||||
- AWS Bedrock(默认)
|
||||
- OpenAI
|
||||
- Anthropic
|
||||
- Google AI
|
||||
- Azure OpenAI
|
||||
- Ollama
|
||||
- OpenRouter
|
||||
- DeepSeek
|
||||
- SiliconFlow
|
||||
|
||||
除AWS Bedrock和OpenRouter外,所有提供商都支持自定义端点。
|
||||
|
||||
📖 **[详细的提供商配置指南](./ai-providers.md)** - 查看各提供商的设置说明。
|
||||
|
||||
**模型要求**:此任务需要强大的模型能力,因为它涉及生成具有严格格式约束的长文本(draw.io XML)。推荐使用Claude Sonnet 4.5、GPT-4o、Gemini 2.0和DeepSeek V3/R1。
|
||||
|
||||
注意:`claude-sonnet-4-5` 已在带有AWS标志的draw.io图表上进行训练,因此如果您想创建AWS架构图,这是最佳选择。
|
||||
|
||||
|
||||
## 工作原理
|
||||
|
||||
本应用使用以下技术:
|
||||
|
||||
- **Next.js**:用于前端框架和路由
|
||||
- **Vercel AI SDK**(`ai` + `@ai-sdk/*`):用于流式AI响应和多提供商支持
|
||||
- **react-drawio**:用于图表表示和操作
|
||||
|
||||
图表以XML格式表示,可在draw.io中渲染。AI处理您的命令并相应地生成或修改此XML。
|
||||
|
||||
## 项目结构
|
||||
|
||||
```
|
||||
@@ -235,6 +195,14 @@ lib/ # 工具函数和辅助程序
|
||||
public/ # 静态资源包括示例图片
|
||||
```
|
||||
|
||||
## 待办事项
|
||||
|
||||
- [x] 允许LLM修改XML而不是每次从头生成
|
||||
- [x] 提高形状流式更新的流畅度
|
||||
- [x] 添加多AI提供商支持(OpenAI, Anthropic, Google, Azure, Ollama)
|
||||
- [x] 解决超过60秒的会话生成失败的bug
|
||||
- [ ] 在UI上添加API配置
|
||||
|
||||
## 支持与联系
|
||||
|
||||
如果您觉得这个项目有用,请考虑[赞助](https://github.com/sponsors/DayuanJiang)来帮助我托管在线演示站点!
|
||||
@@ -4,16 +4,14 @@
|
||||
|
||||
**AI搭載のダイアグラム作成ツール - チャット、描画、可視化**
|
||||
|
||||
[English](../README.md) | [中文](./README_CN.md) | 日本語
|
||||
[English](./README.md) | [中文](./README_CN.md) | 日本語
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
[](https://opensource.org/licenses/Apache-2.0)
|
||||
[](https://nextjs.org/)
|
||||
[](https://react.dev/)
|
||||
[](https://opensource.org/licenses/MIT)
|
||||
[](https://nextjs.org/)
|
||||
[](https://www.typescriptlang.org/)
|
||||
[](https://github.com/sponsors/DayuanJiang)
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
[🚀 ライブデモ](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
</div>
|
||||
|
||||
@@ -21,23 +19,16 @@ AI機能とdraw.ioダイアグラムを統合したNext.jsウェブアプリケ
|
||||
|
||||
https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
|
||||
|
||||
## 目次
|
||||
- [Next AI Draw.io](#next-ai-drawio)
|
||||
- [目次](#目次)
|
||||
- [例](#例)
|
||||
- [機能](#機能)
|
||||
- [はじめに](#はじめに)
|
||||
- [オンラインで試す](#オンラインで試す)
|
||||
- [Dockerで実行(推奨)](#dockerで実行推奨)
|
||||
- [インストール](#インストール)
|
||||
- [デプロイ](#デプロイ)
|
||||
- [マルチプロバイダーサポート](#マルチプロバイダーサポート)
|
||||
- [仕組み](#仕組み)
|
||||
- [プロジェクト構造](#プロジェクト構造)
|
||||
- [サポート&お問い合わせ](#サポートお問い合わせ)
|
||||
- [スター履歴](#スター履歴)
|
||||
## 機能
|
||||
|
||||
## 例
|
||||
- **LLM搭載のダイアグラム作成**:大規模言語モデルを活用して、自然言語コマンドで直接draw.ioダイアグラムを作成・操作
|
||||
- **画像ベースのダイアグラム複製**:既存のダイアグラムや画像をアップロードし、AIが自動的に複製・強化
|
||||
- **ダイアグラム履歴**:すべての変更を追跡する包括的なバージョン管理。AI編集前のダイアグラムの以前のバージョンを表示・復元可能
|
||||
- **インタラクティブなチャットインターフェース**:AIとリアルタイムでコミュニケーションしてダイアグラムを改善
|
||||
- **AWSアーキテクチャダイアグラムサポート**:AWSアーキテクチャダイアグラムの生成を専門的にサポート
|
||||
- **アニメーションコネクタ**:より良い可視化のためにダイアグラム要素間に動的でアニメーション化されたコネクタを作成
|
||||
|
||||
## **例**
|
||||
|
||||
以下はいくつかのプロンプト例と生成されたダイアグラムです:
|
||||
|
||||
@@ -47,59 +38,68 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
|
||||
<td colspan="2" valign="top" align="center">
|
||||
<strong>アニメーションTransformerコネクタ</strong><br />
|
||||
<p><strong>プロンプト:</strong> **アニメーションコネクタ**付きのTransformerアーキテクチャ図を作成してください。</p>
|
||||
<img src="../public/animated_connectors.svg" alt="アニメーションコネクタ付きTransformerアーキテクチャ" width="480" />
|
||||
<img src="./public/animated_connectors.svg" alt="アニメーションコネクタ付きTransformerアーキテクチャ" width="480" />
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="50%" valign="top">
|
||||
<strong>GCPアーキテクチャ図</strong><br />
|
||||
<p><strong>プロンプト:</strong> **GCPアイコン**を使用してGCPアーキテクチャ図を生成してください。この図では、ユーザーがインスタンス上でホストされているフロントエンドに接続します。</p>
|
||||
<img src="../public/gcp_demo.svg" alt="GCPアーキテクチャ図" width="480" />
|
||||
<img src="./public/gcp_demo.svg" alt="GCPアーキテクチャ図" width="480" />
|
||||
</td>
|
||||
<td width="50%" valign="top">
|
||||
<strong>AWSアーキテクチャ図</strong><br />
|
||||
<p><strong>プロンプト:</strong> **AWSアイコン**を使用してAWSアーキテクチャ図を生成してください。この図では、ユーザーがインスタンス上でホストされているフロントエンドに接続します。</p>
|
||||
<img src="../public/aws_demo.svg" alt="AWSアーキテクチャ図" width="480" />
|
||||
<img src="./public/aws_demo.svg" alt="AWSアーキテクチャ図" width="480" />
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="50%" valign="top">
|
||||
<strong>Azureアーキテクチャ図</strong><br />
|
||||
<p><strong>プロンプト:</strong> **Azureアイコン**を使用してAzureアーキテクチャ図を生成してください。この図では、ユーザーがインスタンス上でホストされているフロントエンドに接続します。</p>
|
||||
<img src="../public/azure_demo.svg" alt="Azureアーキテクチャ図" width="480" />
|
||||
<img src="./public/azure_demo.svg" alt="Azureアーキテクチャ図" width="480" />
|
||||
</td>
|
||||
<td width="50%" valign="top">
|
||||
<strong>猫のスケッチ</strong><br />
|
||||
<p><strong>プロンプト:</strong> かわいい猫を描いてください。</p>
|
||||
<img src="../public/cat_demo.svg" alt="猫の絵" width="240" />
|
||||
<img src="./public/cat_demo.svg" alt="猫の絵" width="240" />
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
</div>
|
||||
|
||||
## 機能
|
||||
## 仕組み
|
||||
|
||||
- **LLM搭載のダイアグラム作成**:大規模言語モデルを活用して、自然言語コマンドで直接draw.ioダイアグラムを作成・操作
|
||||
- **画像ベースのダイアグラム複製**:既存のダイアグラムや画像をアップロードし、AIが自動的に複製・強化
|
||||
- **PDFとテキストファイルのアップロード**:PDFドキュメントやテキストファイルをアップロードして、既存のドキュメントからコンテンツを抽出し、ダイアグラムを生成
|
||||
- **AI推論プロセス表示**:サポートされているモデル(OpenAI o1/o3、Gemini、Claudeなど)のAIの思考プロセスを表示
|
||||
- **ダイアグラム履歴**:すべての変更を追跡する包括的なバージョン管理。AI編集前のダイアグラムの以前のバージョンを表示・復元可能
|
||||
- **インタラクティブなチャットインターフェース**:AIとリアルタイムでコミュニケーションしてダイアグラムを改善
|
||||
- **クラウドアーキテクチャダイアグラムサポート**:クラウドアーキテクチャダイアグラムの生成を専門的にサポート(AWS、GCP、Azure)
|
||||
- **アニメーションコネクタ**:より良い可視化のためにダイアグラム要素間に動的でアニメーション化されたコネクタを作成
|
||||
本アプリケーションは以下の技術を使用しています:
|
||||
|
||||
- **Next.js**:フロントエンドフレームワークとルーティング
|
||||
- **Vercel AI SDK**(`ai` + `@ai-sdk/*`):ストリーミングAIレスポンスとマルチプロバイダーサポート
|
||||
- **react-drawio**:ダイアグラムの表現と操作
|
||||
|
||||
ダイアグラムはdraw.ioでレンダリングできるXMLとして表現されます。AIがコマンドを処理し、それに応じてこのXMLを生成または変更します。
|
||||
|
||||
## マルチプロバイダーサポート
|
||||
|
||||
- AWS Bedrock(デフォルト)
|
||||
- OpenAI
|
||||
- Anthropic
|
||||
- Google AI
|
||||
- Azure OpenAI
|
||||
- Ollama
|
||||
- OpenRouter
|
||||
- DeepSeek
|
||||
- SiliconFlow
|
||||
|
||||
AWS BedrockとOpenRouter以外のすべてのプロバイダーはカスタムエンドポイントをサポートしています。
|
||||
|
||||
📖 **[詳細なプロバイダー設定ガイド](./docs/ai-providers.md)** - 各プロバイダーの設定手順をご覧ください。
|
||||
|
||||
**モデル要件**:このタスクは厳密なフォーマット制約(draw.io XML)を持つ長文テキスト生成を伴うため、強力なモデル機能が必要です。Claude Sonnet 4.5、GPT-4o、Gemini 2.0、DeepSeek V3/R1を推奨します。
|
||||
|
||||
注:`claude-sonnet-4-5`はAWSロゴ付きのdraw.ioダイアグラムで学習されているため、AWSアーキテクチャダイアグラムを作成したい場合は最適な選択です。
|
||||
|
||||
## はじめに
|
||||
|
||||
### オンラインで試す
|
||||
|
||||
インストール不要!デモサイトで直接お試しください:
|
||||
|
||||
[](https://next-ai-drawio.jiang.jp/)
|
||||
|
||||
> 注意:アクセス数が多いため、デモサイトでは現在 minimax-m2 モデルを使用しています。最高の結果を得るには、Claude Sonnet 4.5 または Claude Opus 4.5 でのセルフホスティングをお勧めします。
|
||||
|
||||
> **自分のAPIキーを使用**:自分のAPIキーを使用することで、デモサイトの利用制限を回避できます。チャットパネルの設定アイコンをクリックして、プロバイダーとAPIキーを設定してください。キーはブラウザのローカルに保存され、サーバーには保存されません。
|
||||
|
||||
### Dockerで実行(推奨)
|
||||
|
||||
ローカルで実行したいだけなら、Dockerを使用するのが最も簡単です。
|
||||
@@ -116,20 +116,10 @@ docker run -d -p 3000:3000 \
|
||||
ghcr.io/dayuanjiang/next-ai-draw-io:latest
|
||||
```
|
||||
|
||||
または env ファイルを使用:
|
||||
|
||||
```bash
|
||||
cp env.example .env
|
||||
# .env を編集して設定を入力
|
||||
docker run -d -p 3000:3000 --env-file .env ghcr.io/dayuanjiang/next-ai-draw-io:latest
|
||||
```
|
||||
|
||||
ブラウザで [http://localhost:3000](http://localhost:3000) を開いてください。
|
||||
|
||||
環境変数はお好みのAIプロバイダー設定に置き換えてください。利用可能なオプションについては[マルチプロバイダーサポート](#マルチプロバイダーサポート)を参照してください。
|
||||
|
||||
> **オフラインデプロイ:** `embed.diagrams.net` がブロックされている場合は、[オフラインデプロイガイド](./offline-deployment.md) で設定オプションをご確認ください。
|
||||
|
||||
### インストール
|
||||
|
||||
1. リポジトリをクローン:
|
||||
@@ -143,6 +133,8 @@ cd next-ai-draw-io
|
||||
|
||||
```bash
|
||||
npm install
|
||||
# または
|
||||
yarn install
|
||||
```
|
||||
|
||||
3. AIプロバイダーを設定:
|
||||
@@ -163,7 +155,7 @@ cp env.example .env.local
|
||||
|
||||
> 警告:`ACCESS_CODE_LIST`を設定しない場合、誰でもデプロイされたサイトに直接アクセスできるため、トークンが急速に消費される可能性があります。このオプションを設定することをお勧めします。
|
||||
|
||||
詳細な設定手順については[プロバイダー設定ガイド](./ai-providers.md)を参照してください。
|
||||
詳細な設定手順については[プロバイダー設定ガイド](./docs/ai-providers.md)を参照してください。
|
||||
|
||||
4. 開発サーバーを起動:
|
||||
|
||||
@@ -184,38 +176,6 @@ Next.jsアプリをデプロイする最も簡単な方法は、Next.jsの作成
|
||||
|
||||
ローカルの`.env.local`ファイルと同様に、Vercelダッシュボードで**環境変数を設定**してください。
|
||||
|
||||
|
||||
## マルチプロバイダーサポート
|
||||
|
||||
- AWS Bedrock(デフォルト)
|
||||
- OpenAI
|
||||
- Anthropic
|
||||
- Google AI
|
||||
- Azure OpenAI
|
||||
- Ollama
|
||||
- OpenRouter
|
||||
- DeepSeek
|
||||
- SiliconFlow
|
||||
|
||||
AWS BedrockとOpenRouter以外のすべてのプロバイダーはカスタムエンドポイントをサポートしています。
|
||||
|
||||
📖 **[詳細なプロバイダー設定ガイド](./ai-providers.md)** - 各プロバイダーの設定手順をご覧ください。
|
||||
|
||||
**モデル要件**:このタスクは厳密なフォーマット制約(draw.io XML)を持つ長文テキスト生成を伴うため、強力なモデル機能が必要です。Claude Sonnet 4.5、GPT-4o、Gemini 2.0、DeepSeek V3/R1を推奨します。
|
||||
|
||||
注:`claude-sonnet-4-5`はAWSロゴ付きのdraw.ioダイアグラムで学習されているため、AWSアーキテクチャダイアグラムを作成したい場合は最適な選択です。
|
||||
|
||||
|
||||
## 仕組み
|
||||
|
||||
本アプリケーションは以下の技術を使用しています:
|
||||
|
||||
- **Next.js**:フロントエンドフレームワークとルーティング
|
||||
- **Vercel AI SDK**(`ai` + `@ai-sdk/*`):ストリーミングAIレスポンスとマルチプロバイダーサポート
|
||||
- **react-drawio**:ダイアグラムの表現と操作
|
||||
|
||||
ダイアグラムはdraw.ioでレンダリングできるXMLとして表現されます。AIがコマンドを処理し、それに応じてこのXMLを生成または変更します。
|
||||
|
||||
## プロジェクト構造
|
||||
|
||||
```
|
||||
@@ -235,6 +195,14 @@ lib/ # ユーティリティ関数とヘルパー
|
||||
public/ # サンプル画像を含む静的アセット
|
||||
```
|
||||
|
||||
## TODO
|
||||
|
||||
- [x] LLMが毎回ゼロから生成する代わりにXMLを修正できるようにする
|
||||
- [x] シェイプストリーミング更新の滑らかさを改善
|
||||
- [x] 複数のAIプロバイダーサポートを追加(OpenAI, Anthropic, Google, Azure, Ollama)
|
||||
- [x] 60秒以上のセッションで生成が失敗するバグを解決
|
||||
- [ ] UIにAPI設定を追加
|
||||
|
||||
## サポート&お問い合わせ
|
||||
|
||||
このプロジェクトが役に立ったら、ライブデモサイトのホスティングを支援するために[スポンサー](https://github.com/sponsors/DayuanJiang)をご検討ください!
|
||||
22 amplify.yml (new file)

@@ -0,0 +1,22 @@
version: 1
frontend:
    phases:
        preBuild:
            commands:
                - npm ci --cache .npm --prefer-offline
        build:
            commands:
                # Write env vars to .env.production for Next.js SSR runtime
                - env | grep -e AI_MODEL >> .env.production
                - env | grep -e AI_PROVIDER >> .env.production
                - env | grep -e OPENAI_API_KEY >> .env.production
                - env | grep -e NEXT_PUBLIC_ >> .env.production
                - npm run build
    artifacts:
        baseDirectory: .next
        files:
            - '**/*'
    cache:
        paths:
            - .next/cache/**/*
            - .npm/**/*
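The grep commands above copy selected build-time environment variables into `.env.production`, because values set in the Amplify console are not automatically available to the Next.js server at runtime. A small sketch of server-side code reading them afterwards; the variable names come from this file, while the helper itself is hypothetical:

```typescript
// Hypothetical helper: reads values that amplify.yml wrote into .env.production.
export function getServerAIConfig() {
    return {
        provider: process.env.AI_PROVIDER, // matches the AI_PROVIDER grep above
        model: process.env.AI_MODEL, // matches the AI_MODEL grep above
        openaiApiKey: process.env.OPENAI_API_KEY,
    }
}
```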
@@ -10,18 +10,7 @@ export const metadata: Metadata = {
|
||||
keywords: ["AI图表", "draw.io", "AWS架构", "GCP图表", "Azure图表", "LLM"],
|
||||
}
|
||||
|
||||
function formatNumber(num: number): string {
|
||||
if (num >= 1000) {
|
||||
return `${num / 1000}k`
|
||||
}
|
||||
return num.toString()
|
||||
}
|
||||
|
||||
export default function AboutCN() {
|
||||
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
|
||||
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
|
||||
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
|
||||
|
||||
return (
|
||||
<div className="min-h-screen bg-gray-50">
|
||||
{/* Navigation */}
|
||||
@@ -96,124 +85,12 @@ export default function AboutCN() {
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="relative mb-8 rounded-2xl bg-gradient-to-br from-amber-50 via-orange-50 to-rose-50 p-[1px] shadow-lg">
|
||||
<div className="absolute inset-0 rounded-2xl bg-gradient-to-br from-amber-400 via-orange-400 to-rose-400 opacity-20" />
|
||||
<div className="relative rounded-2xl bg-white/80 backdrop-blur-sm p-6">
|
||||
{/* Header */}
|
||||
<div className="mb-4">
|
||||
<h3 className="text-lg font-bold text-gray-900 tracking-tight">
|
||||
模型变更与用量限制{" "}
|
||||
<span className="text-sm text-amber-600 font-medium italic font-normal">
|
||||
(或者说:我的钱包顶不住了)
|
||||
</span>
|
||||
</h3>
|
||||
</div>
|
||||
|
||||
{/* Story */}
|
||||
<div className="space-y-3 text-sm text-gray-700 leading-relaxed mb-5">
|
||||
<p>
|
||||
大家对这个项目的热情太高了——看来大家都真的很喜欢画图!但这也带来了一个幸福的烦恼:我们经常触发出上游
|
||||
AI 接口的频率限制
|
||||
(TPS/TPM)。一旦超限,系统就会暂停,导致请求失败。
|
||||
</p>
|
||||
<p>
|
||||
由于使用量过高,我已将模型从 Claude 更换为{" "}
|
||||
<span className="font-semibold text-amber-700">
|
||||
minimax-m2
|
||||
</span>
|
||||
,以降低成本。
|
||||
</p>
|
||||
<p>
|
||||
作为一个
|
||||
<span className="font-semibold text-amber-700">
|
||||
独立开发者
|
||||
</span>
|
||||
,目前的 API
|
||||
费用全是我自己在掏腰包(纯属为爱发电)。为了保证服务能细水长流,同时也为了避免我个人陷入财务危机,我还设置了以下临时用量限制:
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Limits Cards */}
|
||||
<div className="grid grid-cols-2 gap-3 mb-5">
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
Token 用量
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(tpmLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/分钟
|
||||
</span>
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(dailyTokenLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/天
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
每日请求数
|
||||
</div>
|
||||
<div className="text-2xl font-bold text-gray-900">
|
||||
{dailyRequestLimit}
|
||||
</div>
|
||||
<div className="text-sm text-gray-600">
|
||||
次
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 my-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Bring Your Own Key */}
|
||||
<div className="text-center mb-5">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
使用自己的 API Key
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-2 max-w-md mx-auto">
|
||||
您可以使用自己的 API Key
|
||||
来绕过这些限制。点击聊天面板中的设置图标即可配置您的
|
||||
Provider 和 API Key。
|
||||
</p>
|
||||
<p className="text-xs text-gray-500 max-w-md mx-auto">
|
||||
您的 Key
|
||||
仅保存在浏览器本地,不会被存储在服务器上。
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 mb-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Sponsorship CTA */}
|
||||
<div className="text-center">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
寻求赞助 (求大佬捞一把)
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
要想彻底解除这些限制,扩容后端是唯一的办法。我正在积极寻求
|
||||
AI API 提供商或云平台的赞助。
|
||||
</p>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
作为回报(无论是额度支持还是资金支持),我将在
|
||||
GitHub 仓库和 Live Demo
|
||||
网站的显眼位置展示贵公司的 Logo
|
||||
作为平台赞助商。
|
||||
</p>
|
||||
<a
|
||||
href="mailto:me@jiang.jp"
|
||||
className="inline-flex items-center gap-2 px-5 py-2.5 rounded-full bg-gradient-to-r from-amber-500 to-orange-500 text-white font-medium text-sm shadow-md hover:shadow-lg hover:scale-105 transition-all duration-200"
|
||||
>
|
||||
联系我
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
<div className="bg-amber-50 border border-amber-200 rounded-lg p-4 mb-6">
|
||||
<p className="text-amber-800">
|
||||
本应用设计运行于 Claude Opus 4.5
|
||||
以获得最佳性能。但由于流量超出预期,运行顶级模型的成本变得难以承受。为避免服务中断并控制成本,我已将后端切换至
|
||||
Claude Haiku 4.5。
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<p className="text-gray-700">
|
||||
|
||||
@@ -17,18 +17,7 @@ export const metadata: Metadata = {
|
||||
],
|
||||
}
|
||||
|
||||
function formatNumber(num: number): string {
|
||||
if (num >= 1000) {
|
||||
return `${num / 1000}k`
|
||||
}
|
||||
return num.toString()
|
||||
}
|
||||
|
||||
export default function AboutJA() {
|
||||
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
|
||||
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
|
||||
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
|
||||
|
||||
return (
|
||||
<div className="min-h-screen bg-gray-50">
|
||||
{/* Navigation */}
|
||||
@@ -104,121 +93,13 @@ export default function AboutJA() {
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="relative mb-8 rounded-2xl bg-gradient-to-br from-amber-50 via-orange-50 to-rose-50 p-[1px] shadow-lg">
|
||||
<div className="absolute inset-0 rounded-2xl bg-gradient-to-br from-amber-400 via-orange-400 to-rose-400 opacity-20" />
|
||||
<div className="relative rounded-2xl bg-white/80 backdrop-blur-sm p-6">
|
||||
{/* Header */}
|
||||
<div className="mb-4">
|
||||
<h3 className="text-lg font-bold text-gray-900 tracking-tight">
|
||||
モデル変更と利用制限について{" "}
|
||||
<span className="text-sm text-amber-600 font-medium italic font-normal">
|
||||
(別名:お財布が悲鳴を上げています)
|
||||
</span>
|
||||
</h3>
|
||||
</div>
|
||||
|
||||
{/* Story */}
|
||||
<div className="space-y-3 text-sm text-gray-700 leading-relaxed mb-5">
|
||||
<p>
|
||||
予想以上の反響をいただき、ありがとうございます!皆様にダイアグラム作成を楽しんでいただいているのは嬉しい限りですが、その熱量により
|
||||
AI API のレート制限 (TPS/TPM)
|
||||
に頻繁に引っかかってしまっています。制限に達するとシステムが一時停止し、エラーが発生してしまいます。
|
||||
</p>
|
||||
<p>
|
||||
利用量の増加に伴い、コスト削減のためモデルを
|
||||
Claude から{" "}
|
||||
<span className="font-semibold text-amber-700">
|
||||
minimax-m2
|
||||
</span>{" "}
|
||||
に変更しました。
|
||||
</p>
|
||||
<p>
|
||||
私は現在、
|
||||
<span className="font-semibold text-amber-700">
|
||||
個人開発者
|
||||
</span>
|
||||
として API
|
||||
費用を全額自腹で負担しています。サービスを継続し、かつ私自身が借金を背負わないようにするため(笑)、一時的に以下の利用制限も設けさせていただきました。
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Limits Cards */}
|
||||
<div className="grid grid-cols-2 gap-3 mb-5">
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
トークン使用量
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(tpmLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/分
|
||||
</span>
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(dailyTokenLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/日
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
1日のリクエスト数
|
||||
</div>
|
||||
<div className="text-2xl font-bold text-gray-900">
|
||||
{dailyRequestLimit}
|
||||
</div>
|
||||
<div className="text-sm text-gray-600">
|
||||
回
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 my-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Bring Your Own Key */}
|
||||
<div className="text-center mb-5">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
自分のAPIキーを使用
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-2 max-w-md mx-auto">
|
||||
自分のAPIキーを使用することで、これらの制限を回避できます。チャットパネルの設定アイコンをクリックして、プロバイダーとAPIキーを設定してください。
|
||||
</p>
|
||||
<p className="text-xs text-gray-500 max-w-md mx-auto">
|
||||
キーはブラウザのローカルに保存され、サーバーには保存されません。
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 mb-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Sponsorship CTA */}
|
||||
<div className="text-center">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
スポンサー募集
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
これらの制限を取り払い、バックエンドをスケールさせるには皆様の支援が必要です。現在、AI
|
||||
API
|
||||
プロバイダー様やクラウドプラットフォーム様からのスポンサー支援を積極的に募集しています。
|
||||
</p>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
ご支援(クレジット提供や資金援助)をいただける場合、GitHub
|
||||
リポジトリおよびデモサイトにて、プラットフォームスポンサーとして貴社を大々的にご紹介させていただきます。
|
||||
</p>
|
||||
<a
|
||||
href="mailto:me@jiang.jp"
|
||||
className="inline-flex items-center gap-2 px-5 py-2.5 rounded-full bg-gradient-to-r from-amber-500 to-orange-500 text-white font-medium text-sm shadow-md hover:shadow-lg hover:scale-105 transition-all duration-200"
|
||||
>
|
||||
お問い合わせ
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
<div className="bg-amber-50 border border-amber-200 rounded-lg p-4 mb-6">
|
||||
<p className="text-amber-800">
|
||||
本アプリは最高のパフォーマンスを発揮するため、Claude
|
||||
Opus 4.5
|
||||
で動作するよう設計されています。しかし、予想以上のトラフィックにより、最上位モデルの運用コストが負担となっています。サービスの中断を避け、コストを管理するため、バックエンドを
|
||||
Claude Haiku 4.5 に切り替えました。
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<p className="text-gray-700">
|
||||
|
||||
@@ -17,18 +17,7 @@ export const metadata: Metadata = {
|
||||
],
|
||||
}
|
||||
|
||||
function formatNumber(num: number): string {
|
||||
if (num >= 1000) {
|
||||
return `${num / 1000}k`
|
||||
}
|
||||
return num.toString()
|
||||
}
|
||||
|
||||
export default function About() {
|
||||
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
|
||||
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
|
||||
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
|
||||
|
||||
return (
|
||||
<div className="min-h-screen bg-gray-50">
|
||||
{/* Navigation */}
|
||||
@@ -104,134 +93,15 @@ export default function About() {
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="relative mb-8 rounded-2xl bg-gradient-to-br from-amber-50 via-orange-50 to-rose-50 p-[1px] shadow-lg">
|
||||
<div className="absolute inset-0 rounded-2xl bg-gradient-to-br from-amber-400 via-orange-400 to-rose-400 opacity-20" />
|
||||
<div className="relative rounded-2xl bg-white/80 backdrop-blur-sm p-6">
|
||||
{/* Header */}
|
||||
<div className="mb-4">
|
||||
<h3 className="text-lg font-bold text-gray-900 tracking-tight">
|
||||
Model Change & Usage Limits{" "}
|
||||
<span className="text-sm text-amber-600 font-medium italic font-normal">
|
||||
(Or: Why My Wallet is Crying)
|
||||
</span>
|
||||
</h3>
|
||||
</div>
|
||||
|
||||
{/* Story */}
|
||||
<div className="space-y-3 text-sm text-gray-700 leading-relaxed mb-5">
|
||||
<p>
|
||||
The response to this project has been
|
||||
incredible—you all love making diagrams!
|
||||
However, this enthusiasm means we are
|
||||
frequently hitting the AI API rate limits
|
||||
(TPS/TPM). When this happens, the system
|
||||
pauses, leading to failed requests.
|
||||
</p>
|
||||
<p>
|
||||
Due to the high usage, I have changed the
|
||||
model from Claude to{" "}
|
||||
<span className="font-semibold text-amber-700">
|
||||
minimax-m2
|
||||
</span>
|
||||
, which is more cost-effective.
|
||||
</p>
|
||||
<p>
|
||||
As an{" "}
|
||||
<span className="font-semibold text-amber-700">
|
||||
indie developer
|
||||
</span>
|
||||
, I am currently footing the entire API
|
||||
bill. To keep the lights on and ensure the
|
||||
service remains available to everyone
|
||||
without sending me into debt, I have also
|
||||
implemented the following temporary caps:
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Limits Cards */}
|
||||
<div className="grid grid-cols-2 gap-3 mb-5">
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
Token Usage
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(tpmLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/min
|
||||
</span>
|
||||
</div>
|
||||
<div className="text-lg font-bold text-gray-900">
|
||||
{formatNumber(dailyTokenLimit)}
|
||||
<span className="text-sm font-normal text-gray-600">
|
||||
/day
|
||||
</span>
|
||||
</div>
|
||||
</div>
|
||||
<div className="rounded-xl bg-gradient-to-br from-amber-100 to-orange-100 p-4 text-center">
|
||||
<div className="text-xs font-medium text-amber-700 uppercase tracking-wide mb-1">
|
||||
Daily Requests
|
||||
</div>
|
||||
<div className="text-2xl font-bold text-gray-900">
|
||||
{dailyRequestLimit}
|
||||
</div>
|
||||
<div className="text-sm text-gray-600">
|
||||
requests
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 my-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Bring Your Own Key */}
|
||||
<div className="text-center mb-5">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
Bring Your Own API Key
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-2 max-w-md mx-auto">
|
||||
You can use your own API key to bypass these
|
||||
limits. Click the Settings icon in the chat
|
||||
panel to configure your provider and API
|
||||
key.
|
||||
</p>
|
||||
<p className="text-xs text-gray-500 max-w-md mx-auto">
|
||||
Your key is stored locally in your browser
|
||||
and is never stored on the server.
|
||||
</p>
|
||||
</div>
|
||||
|
||||
{/* Divider */}
|
||||
<div className="flex items-center gap-3 mb-5">
|
||||
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
|
||||
</div>
|
||||
|
||||
{/* Sponsorship CTA */}
|
||||
<div className="text-center">
|
||||
<h4 className="text-base font-bold text-gray-900 mb-2">
|
||||
Call for Sponsorship
|
||||
</h4>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
Scaling the backend is the only way to
|
||||
remove these limits. I am actively seeking
|
||||
sponsorship from AI API providers or Cloud
|
||||
Platforms.
|
||||
</p>
|
||||
<p className="text-sm text-gray-600 mb-4 max-w-md mx-auto">
|
||||
In return for support (credits or funding),
|
||||
I will prominently feature your company as a
|
||||
platform sponsor on both the GitHub
|
||||
repository and the live demo site.
|
||||
</p>
|
||||
<a
|
||||
href="mailto:me@jiang.jp"
|
||||
className="inline-flex items-center gap-2 px-5 py-2.5 rounded-full bg-gradient-to-r from-amber-500 to-orange-500 text-white font-medium text-sm shadow-md hover:shadow-lg hover:scale-105 transition-all duration-200"
|
||||
>
|
||||
Contact Me
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
<div className="bg-amber-50 border border-amber-200 rounded-lg p-4 mb-6">
|
||||
<p className="text-amber-800">
|
||||
This app is designed to run on Claude Opus 4.5 for
|
||||
best performance. However, due to
|
||||
higher-than-expected traffic, running the top-tier
|
||||
model has become cost-prohibitive. To avoid service
|
||||
interruptions and manage costs, I have switched the
|
||||
backend to Claude Haiku 4.5.
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<p className="text-gray-700">
|
||||
|
||||
@@ -1,14 +1,12 @@
import {
    APICallError,
    convertToModelMessages,
    createUIMessageStream,
    createUIMessageStreamResponse,
    LoadAPIKeyError,
    stepCountIs,
    streamText,
} from "ai"
import { z } from "zod"
import { getAIModel, supportsPromptCaching } from "@/lib/ai-providers"
import { getAIModel } from "@/lib/ai-providers"
import { findCachedResponse } from "@/lib/cached-responses"
import {
    getTelemetryConfig,
@@ -18,7 +16,7 @@ import {
} from "@/lib/langfuse"
import { getSystemPrompt } from "@/lib/system-prompts"

export const maxDuration = 120
export const maxDuration = 60

// File upload limits (must match client-side)
const MAX_FILE_SIZE = 2 * 1024 * 1024 // 2MB

@@ -66,32 +64,32 @@ function isMinimalDiagram(xml: string): boolean {
    return !stripped.includes('id="2"')
}
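For context, `isMinimalDiagram` treats the canvas as effectively empty when it contains only the two default draw.io cells (the root cell and the default layer). A sketch of the two cases, assuming the common convention that the first user-added shape receives `id="2"`; real diagrams may use different ids:

```typescript
// Sketch only: an "empty" vs. a "non-empty" draw.io model as checked above.
// Cells "0" (root) and "1" (default layer) exist in every draw.io document.
const emptyDiagram = `<mxGraphModel><root>
  <mxCell id="0"/>
  <mxCell id="1" parent="0"/>
</root></mxGraphModel>`

const diagramWithOneShape = `<mxGraphModel><root>
  <mxCell id="0"/>
  <mxCell id="1" parent="0"/>
  <mxCell id="2" value="Web Server" style="rounded=1;" vertex="1" parent="1">
    <mxGeometry x="120" y="120" width="160" height="60" as="geometry"/>
  </mxCell>
</root></mxGraphModel>`

// isMinimalDiagram(emptyDiagram) === true
// isMinimalDiagram(diagramWithOneShape) === false
```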
|
||||
// Helper function to replace historical tool call XML with placeholders
|
||||
// This reduces token usage and forces LLM to rely on the current diagram XML (source of truth)
|
||||
function replaceHistoricalToolInputs(messages: any[]): any[] {
|
||||
// Helper function to fix tool call inputs for Bedrock API
|
||||
// Bedrock requires toolUse.input to be a JSON object, not a string
|
||||
function fixToolCallInputs(messages: any[]): any[] {
|
||||
return messages.map((msg) => {
|
||||
if (msg.role !== "assistant" || !Array.isArray(msg.content)) {
|
||||
return msg
|
||||
}
|
||||
const replacedContent = msg.content.map((part: any) => {
|
||||
const fixedContent = msg.content.map((part: any) => {
|
||||
if (part.type === "tool-call") {
|
||||
const toolName = part.toolName
|
||||
if (
|
||||
toolName === "display_diagram" ||
|
||||
toolName === "edit_diagram"
|
||||
) {
|
||||
return {
|
||||
...part,
|
||||
input: {
|
||||
placeholder:
|
||||
"[XML content replaced - see current diagram XML in system context]",
|
||||
},
|
||||
if (typeof part.input === "string") {
|
||||
try {
|
||||
const parsed = JSON.parse(part.input)
|
||||
return { ...part, input: parsed }
|
||||
} catch {
|
||||
// If parsing fails, wrap the string in an object
|
||||
return { ...part, input: { rawInput: part.input } }
|
||||
}
|
||||
}
|
||||
// Input is already an object, but verify it's not null/undefined
|
||||
if (part.input === null || part.input === undefined) {
|
||||
return { ...part, input: {} }
|
||||
}
|
||||
}
|
||||
return part
|
||||
})
|
||||
return { ...msg, content: replacedContent }
|
||||
return { ...msg, content: fixedContent }
|
||||
})
|
||||
}
|
||||
|
||||
@@ -144,7 +142,7 @@ async function handleChatRequest(req: Request): Promise<Response> {
        }
    }

    const { messages, xml, previousXml, sessionId } = await req.json()
    const { messages, xml, sessionId } = await req.json()

    // Get user IP for Langfuse tracking
    const forwardedFor = req.headers.get("x-forwarded-for")
@@ -157,9 +155,9 @@
        : undefined

    // Extract user input text for Langfuse trace
    const lastMessage = messages[messages.length - 1]
    const currentMessage = messages[messages.length - 1]
    const userInputText =
        lastMessage?.parts?.find((p: any) => p.type === "text")?.text || ""
        currentMessage?.parts?.find((p: any) => p.type === "text")?.text || ""

    // Update Langfuse trace with input, session, and user
    setTraceInput({
@@ -179,40 +177,59 @@ async function handleChatRequest(req: Request): Promise<Response> {
    const isFirstMessage = messages.length === 1
    const isEmptyDiagram = !xml || xml.trim() === "" || isMinimalDiagram(xml)

    // DEBUG: Log cache check conditions
    console.log("[Cache DEBUG] messages.length:", messages.length)
    console.log("[Cache DEBUG] isFirstMessage:", isFirstMessage)
    console.log("[Cache DEBUG] xml length:", xml?.length || 0)
    console.log("[Cache DEBUG] xml preview:", xml?.substring(0, 200))
    console.log("[Cache DEBUG] isEmptyDiagram:", isEmptyDiagram)

    if (isFirstMessage && isEmptyDiagram) {
        const lastMessage = messages[0]
        const textPart = lastMessage.parts?.find((p: any) => p.type === "text")
        const filePart = lastMessage.parts?.find((p: any) => p.type === "file")

        console.log("[Cache DEBUG] textPart?.text:", textPart?.text)
        console.log("[Cache DEBUG] hasFilePart:", !!filePart)

        const cached = findCachedResponse(textPart?.text || "", !!filePart)

        console.log("[Cache DEBUG] cached found:", !!cached)

        if (cached) {
            console.log(
                "[Cache] Returning cached response for:",
                textPart?.text,
            )
            return createCachedStreamResponse(cached.xml)
        } else {
            console.log("[Cache DEBUG] No cache match - checking why...")
            console.log(
                "[Cache DEBUG] Exact promptText:",
                JSON.stringify(textPart?.text),
            )
        }
    } else {
        console.log("[Cache DEBUG] Skipping cache check - conditions not met")
        if (!isFirstMessage)
            console.log("[Cache DEBUG] Reason: not first message")
        if (!isEmptyDiagram)
            console.log("[Cache DEBUG] Reason: diagram not empty")
    }
    // === CACHE CHECK END ===
    // Read client AI provider overrides from headers
    const clientOverrides = {
        provider: req.headers.get("x-ai-provider"),
        baseUrl: req.headers.get("x-ai-base-url"),
        apiKey: req.headers.get("x-ai-api-key"),
        modelId: req.headers.get("x-ai-model"),
    }
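These `x-ai-*` headers are how per-user provider settings from the browser reach the route. A minimal client-side sketch of such a request; the endpoint path and the example header values are assumptions, while the body fields (`messages`, `xml`, `sessionId`) match what this handler reads:

```typescript
// Hedged sketch: sending per-user provider overrides alongside the chat payload.
// The key is kept in the browser's local settings and never stored on the server.
async function sendChat(messages: unknown[], xml: string, sessionId: string) {
    return fetch("/api/chat", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "x-ai-provider": "openai", // example values; read from local settings in the real app
            "x-ai-model": "gpt-4o",
            "x-ai-api-key": "<user-provided key>",
        },
        body: JSON.stringify({ messages, xml, sessionId }),
    })
}
```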
|
||||
// Get AI model with optional client overrides
|
||||
const { model, providerOptions, headers, modelId } =
|
||||
getAIModel(clientOverrides)
|
||||
|
||||
// Check if model supports prompt caching
|
||||
const shouldCache = supportsPromptCaching(modelId)
|
||||
console.log(
|
||||
`[Prompt Caching] ${shouldCache ? "ENABLED" : "DISABLED"} for model: ${modelId}`,
|
||||
)
|
||||
// Get AI model from environment configuration
|
||||
const { model, providerOptions, headers, modelId } = getAIModel()
|
||||
|
||||
// Get the appropriate system prompt based on model (extended for Opus/Haiku 4.5)
|
||||
const systemMessage = getSystemPrompt(modelId)
|
||||
|
||||
const lastMessage = messages[messages.length - 1]
|
||||
|
||||
// Extract text from the last message parts
|
||||
const lastMessageText =
|
||||
lastMessage.parts?.find((part: any) => part.type === "text")?.text || ""
|
||||
|
||||
// Extract file parts (images) from the last message
|
||||
const fileParts =
|
||||
lastMessage.parts?.filter((part: any) => part.type === "file") || []
|
||||
@@ -220,23 +237,40 @@ async function handleChatRequest(req: Request): Promise<Response> {
|
||||
// User input only - XML is now in a separate cached system message
|
||||
const formattedUserInput = `User input:
|
||||
"""md
|
||||
${userInputText}
|
||||
${lastMessageText}
|
||||
"""`
|
||||
|
||||
// Convert UIMessages to ModelMessages and add system message
|
||||
const modelMessages = convertToModelMessages(messages)
|
||||
|
||||
// Replace historical tool call XML with placeholders to reduce tokens
|
||||
// Disabled by default - some models (e.g. minimax) copy placeholders instead of generating XML
|
||||
const enableHistoryReplace =
|
||||
process.env.ENABLE_HISTORY_XML_REPLACE === "true"
|
||||
const placeholderMessages = enableHistoryReplace
|
||||
? replaceHistoricalToolInputs(modelMessages)
|
||||
: modelMessages
|
||||
// Debug: log raw messages to see what's coming in
|
||||
console.log(
|
||||
"[DEBUG] Raw UI messages:",
|
||||
JSON.stringify(
|
||||
messages.map((m: any, i: number) => ({
|
||||
index: i,
|
||||
role: m.role,
|
||||
partsCount: m.parts?.length,
|
||||
parts: m.parts?.map((p: any) => ({
|
||||
type: p.type,
|
||||
toolName: p.toolName,
|
||||
toolCallId: p.toolCallId,
|
||||
state: p.state,
|
||||
inputType: p.input ? typeof p.input : undefined,
|
||||
input: p.input,
|
||||
})),
|
||||
})),
|
||||
null,
|
||||
2,
|
||||
),
|
||||
)
|
||||
|
||||
// Fix tool call inputs for Bedrock API (requires JSON objects, not strings)
|
||||
const fixedMessages = fixToolCallInputs(modelMessages)
|
||||
|
||||
// Filter out messages with empty content arrays (Bedrock API rejects these)
|
||||
// This is a safety measure - ideally convertToModelMessages should handle all cases
|
||||
let enhancedMessages = placeholderMessages.filter(
|
||||
let enhancedMessages = fixedMessages.filter(
|
||||
(msg: any) =>
|
||||
msg.content && Array.isArray(msg.content) && msg.content.length > 0,
|
||||
)
|
||||
@@ -269,7 +303,7 @@ ${userInputText}
|
||||
// Add cache point to the last assistant message in conversation history
|
||||
// This caches the entire conversation prefix for subsequent requests
|
||||
// Strategy: system (cached) + history with last assistant (cached) + new user message
|
||||
if (shouldCache && enhancedMessages.length >= 2) {
|
||||
if (enhancedMessages.length >= 2) {
|
||||
// Find the last assistant message (should be second-to-last, before current user message)
|
||||
for (let i = enhancedMessages.length - 2; i >= 0; i--) {
|
||||
if (enhancedMessages[i].role === "assistant") {
|
||||
@@ -294,21 +328,17 @@ ${userInputText}
|
||||
{
|
||||
role: "system" as const,
|
||||
content: systemMessage,
|
||||
...(shouldCache && {
|
||||
providerOptions: {
|
||||
bedrock: { cachePoint: { type: "default" } },
|
||||
},
|
||||
}),
|
||||
providerOptions: {
|
||||
bedrock: { cachePoint: { type: "default" } },
|
||||
},
|
||||
},
|
||||
// Cache breakpoint 2: Previous and Current diagram XML context
|
||||
// Cache breakpoint 2: Current diagram XML context
|
||||
{
|
||||
role: "system" as const,
|
||||
content: `${previousXml ? `Previous diagram XML (before user's last message):\n"""xml\n${previousXml}\n"""\n\n` : ""}Current diagram XML (AUTHORITATIVE - the source of truth):\n"""xml\n${xml || ""}\n"""\n\nIMPORTANT: The "Current diagram XML" is the SINGLE SOURCE OF TRUTH for what's on the canvas right now. The user can manually add, delete, or modify shapes directly in draw.io. Always count and describe elements based on the CURRENT XML, not on what you previously generated. If both previous and current XML are shown, compare them to understand what the user changed. When using edit_diagram, COPY search patterns exactly from the CURRENT XML - attribute order matters!`,
|
||||
...(shouldCache && {
|
||||
providerOptions: {
|
||||
bedrock: { cachePoint: { type: "default" } },
|
||||
},
|
||||
}),
|
||||
content: `Current diagram XML:\n"""xml\n${xml || ""}\n"""\nWhen using edit_diagram, COPY search patterns exactly from this XML - attribute order matters!`,
|
||||
providerOptions: {
|
||||
bedrock: { cachePoint: { type: "default" } },
|
||||
},
|
||||
},
|
||||
]
|
||||
|
||||
@@ -316,12 +346,9 @@ ${userInputText}

    const result = streamText({
        model,
        ...(process.env.MAX_OUTPUT_TOKENS && {
            maxOutputTokens: parseInt(process.env.MAX_OUTPUT_TOKENS, 10),
        }),
        stopWhen: stepCountIs(5),
        messages: allMessages,
        ...(providerOptions && { providerOptions }), // This now includes all reasoning configs
        ...(providerOptions && { providerOptions }),
        ...(headers && { headers }),
        // Langfuse telemetry config (returns undefined if not configured)
        ...(getTelemetryConfig({ sessionId: validSessionId, userId }) && {
@@ -330,8 +357,40 @@ ${userInputText}
            userId,
        }),
        }),
        onFinish: ({ text, usage }) => {
        // Repair malformed tool calls (model sometimes generates invalid JSON with unescaped quotes)
        experimental_repairToolCall: async ({ toolCall }) => {
            // The toolCall.input contains the raw JSON string that failed to parse
            const rawJson =
                typeof toolCall.input === "string" ? toolCall.input : null

            if (rawJson) {
                try {
                    // Fix unescaped quotes: x="520" should be x=\"520\"
                    const fixed = rawJson.replace(
                        /([a-zA-Z])="(\d+)"/g,
                        '$1=\\"$2\\"',
                    )
                    const parsed = JSON.parse(fixed)
                    return {
                        type: "tool-call" as const,
                        toolCallId: toolCall.toolCallId,
                        toolName: toolCall.toolName,
                        input: JSON.stringify(parsed),
                    }
                } catch {
                    // Repair failed, return null
                }
            }
            return null
        },
        onFinish: ({ text, usage, providerMetadata }) => {
            console.log(
                "[Cache] Full providerMetadata:",
                JSON.stringify(providerMetadata, null, 2),
            )
            console.log("[Cache] Usage:", JSON.stringify(usage, null, 2))
            // Pass usage to Langfuse (Bedrock streaming doesn't auto-report tokens to telemetry)
            // AI SDK uses inputTokens/outputTokens, Langfuse expects promptTokens/completionTokens
            setTraceOutput(text, {
                promptTokens: usage?.inputTokens,
                completionTokens: usage?.outputTokens,
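The `experimental_repairToolCall` hook above targets one specific failure mode: the model emitting draw.io attributes with unescaped quotes inside the tool-call JSON. A small illustration of the regex it applies:

```typescript
// Illustration of the repair step used above: numeric attribute quotes that the
// model left unescaped inside the JSON string are re-escaped so JSON.parse succeeds.
const raw = '{"xml":"<mxCell x="520" y="80"/>"}' // invalid JSON as produced by the model
const fixed = raw.replace(/([a-zA-Z])="(\d+)"/g, '$1=\\"$2\\"')
// fixed now contains x=\"520\" and y=\"80\", which parses cleanly:
JSON.parse(fixed) // { xml: '<mxCell x="520" y="80"/>' }
```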
@@ -417,91 +476,7 @@ IMPORTANT: Keep edits concise:
}),
})

return result.toUIMessageStreamResponse({
sendReasoning: true,
messageMetadata: ({ part }) => {
if (part.type === "finish") {
const usage = (part as any).totalUsage
if (!usage) {
console.warn(
"[messageMetadata] No usage data in finish part",
)
return undefined
}
// Total input = non-cached + cached (these are separate counts)
// Note: cacheWriteInputTokens is not available on finish part
const totalInputTokens =
(usage.inputTokens ?? 0) + (usage.cachedInputTokens ?? 0)
return {
inputTokens: totalInputTokens,
outputTokens: usage.outputTokens ?? 0,
}
}
return undefined
},
})
}

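The `messageMetadata` callback adds cached and non-cached input counts together because the AI SDK reports them as separate fields on the finish part. A small worked example, with invented numbers, makes the accounting explicit:

```ts
// Illustrative usage object as it might appear on the finish part (values invented)
const usage = { inputTokens: 1200, cachedInputTokens: 8000, outputTokens: 450 }

// Total prompt size the model actually processed = fresh tokens + cache-served tokens
const totalInputTokens = (usage.inputTokens ?? 0) + (usage.cachedInputTokens ?? 0)

console.log(totalInputTokens)        // 9200 — sent to the client as `inputTokens`
console.log(usage.outputTokens ?? 0) // 450
```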
// Helper to categorize errors and return appropriate response
|
||||
function handleError(error: unknown): Response {
|
||||
console.error("Error in chat route:", error)
|
||||
|
||||
const isDev = process.env.NODE_ENV === "development"
|
||||
|
||||
// Check for specific AI SDK error types
|
||||
if (APICallError.isInstance(error)) {
|
||||
return Response.json(
|
||||
{
|
||||
error: error.message,
|
||||
...(isDev && {
|
||||
details: error.responseBody,
|
||||
stack: error.stack,
|
||||
}),
|
||||
},
|
||||
{ status: error.statusCode || 500 },
|
||||
)
|
||||
}
|
||||
|
||||
if (LoadAPIKeyError.isInstance(error)) {
|
||||
return Response.json(
|
||||
{
|
||||
error: "Authentication failed. Please check your API key.",
|
||||
...(isDev && {
|
||||
stack: error.stack,
|
||||
}),
|
||||
},
|
||||
{ status: 401 },
|
||||
)
|
||||
}
|
||||
|
||||
// Fallback for other errors with safety filter
|
||||
const message =
|
||||
error instanceof Error ? error.message : "An unexpected error occurred"
|
||||
const status = (error as any)?.statusCode || (error as any)?.status || 500
|
||||
|
||||
// Prevent leaking API keys, tokens, or other sensitive data
|
||||
const lowerMessage = message.toLowerCase()
|
||||
const safeMessage =
|
||||
lowerMessage.includes("key") ||
|
||||
lowerMessage.includes("token") ||
|
||||
lowerMessage.includes("sig") ||
|
||||
lowerMessage.includes("signature") ||
|
||||
lowerMessage.includes("secret") ||
|
||||
lowerMessage.includes("password") ||
|
||||
lowerMessage.includes("credential")
|
||||
? "Authentication failed. Please check your credentials."
|
||||
: message
|
||||
|
||||
return Response.json(
|
||||
{
|
||||
error: safeMessage,
|
||||
...(isDev && {
|
||||
details: message,
|
||||
stack: error instanceof Error ? error.stack : undefined,
|
||||
}),
|
||||
},
|
||||
{ status },
|
||||
)
|
||||
return result.toUIMessageStreamResponse()
|
||||
}
|
||||
|
||||
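The fallback branch of `handleError` lowercases the message once and scans it for credential-related keywords before echoing it back to the client. The same check, reduced to a standalone helper for illustration (the `redactSensitiveMessage` name is invented; the keyword list mirrors the route above):

```ts
// Sketch: hide error text that may contain secrets before returning it to the client.
const SENSITIVE_KEYWORDS = [
    "key", "token", "sig", "signature", "secret", "password", "credential",
]

function redactSensitiveMessage(message: string): string {
    const lower = message.toLowerCase()
    return SENSITIVE_KEYWORDS.some((kw) => lower.includes(kw))
        ? "Authentication failed. Please check your credentials."
        : message
}

redactSensitiveMessage("Invalid API key provided")
// → "Authentication failed. Please check your credentials."
redactSensitiveMessage("Model overloaded, please retry")
// → returned unchanged
```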
// Wrap handler with error handling
|
||||
@@ -509,7 +484,11 @@ async function safeHandler(req: Request): Promise<Response> {
|
||||
try {
|
||||
return await handleChatRequest(req)
|
||||
} catch (error) {
|
||||
return handleError(error)
|
||||
console.error("Error in chat route:", error)
|
||||
return Response.json(
|
||||
{ error: "Internal server error" },
|
||||
{ status: 500 },
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -1,10 +1,12 @@
import { NextResponse } from "next/server"

export async function GET() {
const accessCodes =
process.env.ACCESS_CODE_LIST?.split(",")
.map((code) => code.trim())
.filter(Boolean) || []

return NextResponse.json({
accessCodeRequired: !!process.env.ACCESS_CODE_LIST,
dailyRequestLimit: Number(process.env.DAILY_REQUEST_LIMIT) || 0,
dailyTokenLimit: Number(process.env.DAILY_TOKEN_LIMIT) || 0,
tpmLimit: Number(process.env.TPM_LIMIT) || 0,
accessCodeRequired: accessCodes.length > 0,
})
}

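A client-side consumer of this endpoint — essentially what the chat panel does further down in this diff — might look like the following sketch. The field names match the route above; the error handling and the `loadPublicConfig` name are assumptions for illustration.

```ts
// Sketch: read the public runtime config exposed by /api/config.
async function loadPublicConfig() {
    try {
        const res = await fetch("/api/config")
        const data = await res.json()
        return {
            accessCodeRequired: Boolean(data.accessCodeRequired),
            dailyRequestLimit: data.dailyRequestLimit || 0,
            dailyTokenLimit: data.dailyTokenLimit || 0,
            tpmLimit: data.tpmLimit || 0,
        }
    } catch {
        // Treat a failed fetch as "no restrictions configured"
        return {
            accessCodeRequired: false,
            dailyRequestLimit: 0,
            dailyTokenLimit: 0,
            tpmLimit: 0,
        }
    }
}
```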
@@ -1,4 +1,5 @@
|
||||
import { GoogleAnalytics } from "@next/third-parties/google"
|
||||
import { Analytics } from "@vercel/analytics/react"
|
||||
import type { Metadata, Viewport } from "next"
|
||||
import { JetBrains_Mono, Plus_Jakarta_Sans } from "next/font/google"
|
||||
import { DiagramProvider } from "@/contexts/diagram-context"
|
||||
@@ -105,7 +106,7 @@ export default function RootLayout({
|
||||
}
|
||||
|
||||
return (
|
||||
<html lang="en" suppressHydrationWarning>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<script
|
||||
type="application/ld+json"
|
||||
@@ -116,6 +117,7 @@ export default function RootLayout({
|
||||
className={`${plusJakarta.variable} ${jetbrainsMono.variable} antialiased`}
|
||||
>
|
||||
<DiagramProvider>{children}</DiagramProvider>
|
||||
<Analytics />
|
||||
</body>
|
||||
{process.env.NEXT_PUBLIC_GA_ID && (
|
||||
<GoogleAnalytics gaId={process.env.NEXT_PUBLIC_GA_ID} />
|
||||
|
||||
91
app/page.tsx
@@ -11,63 +11,33 @@ import {
|
||||
} from "@/components/ui/resizable"
|
||||
import { useDiagram } from "@/contexts/diagram-context"
|
||||
|
||||
const drawioBaseUrl =
|
||||
process.env.NEXT_PUBLIC_DRAWIO_BASE_URL || "https://embed.diagrams.net"
|
||||
|
||||
export default function Home() {
|
||||
const { drawioRef, handleDiagramExport, onDrawioLoad, resetDrawioReady } =
|
||||
useDiagram()
|
||||
const { drawioRef, handleDiagramExport, onDrawioLoad } = useDiagram()
|
||||
const [isMobile, setIsMobile] = useState(false)
|
||||
const [isChatVisible, setIsChatVisible] = useState(true)
|
||||
const [drawioUi, setDrawioUi] = useState<"min" | "sketch">("min")
|
||||
const [darkMode, setDarkMode] = useState(false)
|
||||
const [isLoaded, setIsLoaded] = useState(false)
|
||||
const [isThemeLoaded, setIsThemeLoaded] = useState(false)
|
||||
|
||||
// Load theme from localStorage after mount to avoid hydration mismatch
|
||||
useEffect(() => {
|
||||
const saved = localStorage.getItem("drawio-theme")
|
||||
if (saved === "min" || saved === "sketch") {
|
||||
setDrawioUi(saved)
|
||||
}
|
||||
setIsThemeLoaded(true)
|
||||
}, [])
|
||||
const [closeProtection, setCloseProtection] = useState(false)
|
||||
|
||||
const chatPanelRef = useRef<ImperativePanelHandle>(null)
|
||||
|
||||
// Load preferences from localStorage after mount
|
||||
// Load close protection setting from localStorage after mount
|
||||
useEffect(() => {
|
||||
const savedUi = localStorage.getItem("drawio-theme")
|
||||
if (savedUi === "min" || savedUi === "sketch") {
|
||||
setDrawioUi(savedUi)
|
||||
}
|
||||
|
||||
const savedDarkMode = localStorage.getItem("next-ai-draw-io-dark-mode")
|
||||
if (savedDarkMode !== null) {
|
||||
// Use saved preference
|
||||
const isDark = savedDarkMode === "true"
|
||||
setDarkMode(isDark)
|
||||
document.documentElement.classList.toggle("dark", isDark)
|
||||
} else {
|
||||
// First visit: match browser preference
|
||||
const prefersDark = window.matchMedia(
|
||||
"(prefers-color-scheme: dark)",
|
||||
).matches
|
||||
setDarkMode(prefersDark)
|
||||
document.documentElement.classList.toggle("dark", prefersDark)
|
||||
}
|
||||
|
||||
const savedCloseProtection = localStorage.getItem(
|
||||
STORAGE_CLOSE_PROTECTION_KEY,
|
||||
)
|
||||
if (savedCloseProtection === "true") {
|
||||
const saved = localStorage.getItem(STORAGE_CLOSE_PROTECTION_KEY)
|
||||
// Default to false since auto-save handles persistence
|
||||
if (saved === "true") {
|
||||
setCloseProtection(true)
|
||||
}
|
||||
|
||||
setIsLoaded(true)
|
||||
}, [])
|
||||
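The effect above resolves the initial theme in two steps: a saved preference wins, otherwise the OS-level `prefers-color-scheme` media query decides. The same logic as a standalone sketch (the `resolveInitialDarkMode` name is invented; the storage key is the one used above):

```ts
// Sketch: first-load dark mode resolution, mirroring the effect above.
function resolveInitialDarkMode(): boolean {
    const saved = localStorage.getItem("next-ai-draw-io-dark-mode")
    if (saved !== null) {
        return saved === "true" // explicit user choice wins
    }
    // First visit: fall back to the browser/OS preference
    return window.matchMedia("(prefers-color-scheme: dark)").matches
}

// The caller then toggles the `dark` class once, exactly as the effect does:
document.documentElement.classList.toggle("dark", resolveInitialDarkMode())
```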
const chatPanelRef = useRef<ImperativePanelHandle>(null)
|
||||
|
||||
const toggleDarkMode = () => {
|
||||
const newValue = !darkMode
|
||||
setDarkMode(newValue)
|
||||
localStorage.setItem("next-ai-draw-io-dark-mode", String(newValue))
|
||||
document.documentElement.classList.toggle("dark", newValue)
|
||||
// Reset so onDrawioLoad fires again after remount
|
||||
resetDrawioReady()
|
||||
}
|
||||
|
||||
// Check mobile
|
||||
useEffect(() => {
|
||||
const checkMobile = () => {
|
||||
setIsMobile(window.innerWidth < 768)
|
||||
@@ -91,7 +61,6 @@ export default function Home() {
|
||||
}
|
||||
}
|
||||
|
||||
// Keyboard shortcut for toggling chat panel
|
||||
useEffect(() => {
|
||||
const handleKeyDown = (event: KeyboardEvent) => {
|
||||
if ((event.ctrlKey || event.metaKey) && event.key === "b") {
|
||||
@@ -105,6 +74,7 @@ export default function Home() {
|
||||
}, [])
|
||||
|
||||
// Show confirmation dialog when user tries to leave the page
|
||||
// This helps prevent accidental navigation from browser back gestures
|
||||
useEffect(() => {
|
||||
if (!closeProtection) return
|
||||
|
||||
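The hunk is truncated here, but per the comments above this effect registers a leave-page confirmation while `closeProtection` is enabled. A typical shape for such a handler is sketched below as an assumption about the elided body, not a copy of it; it reuses the component's `closeProtection` state.

```ts
// Sketch: confirm before leaving the page while close protection is on (assumed body).
useEffect(() => {
    if (!closeProtection) return

    const handleBeforeUnload = (event: BeforeUnloadEvent) => {
        event.preventDefault()
        // Some browsers require returnValue to be set to show the native prompt
        event.returnValue = ""
    }

    window.addEventListener("beforeunload", handleBeforeUnload)
    return () => window.removeEventListener("beforeunload", handleBeforeUnload)
}, [closeProtection])
```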
@@ -121,41 +91,34 @@ export default function Home() {
|
||||
return (
|
||||
<div className="h-screen bg-background relative overflow-hidden">
|
||||
<ResizablePanelGroup
|
||||
id="main-panel-group"
|
||||
key={isMobile ? "mobile" : "desktop"}
|
||||
direction={isMobile ? "vertical" : "horizontal"}
|
||||
className="h-full"
|
||||
>
|
||||
{/* Draw.io Canvas */}
|
||||
<ResizablePanel
|
||||
id="drawio-panel"
|
||||
defaultSize={isMobile ? 50 : 67}
|
||||
minSize={20}
|
||||
>
|
||||
<ResizablePanel defaultSize={isMobile ? 50 : 67} minSize={20}>
|
||||
<div
|
||||
className={`h-full relative ${
|
||||
isMobile ? "p-1" : "p-2"
|
||||
}`}
|
||||
>
|
||||
<div className="h-full rounded-xl overflow-hidden shadow-soft-lg border border-border/30">
|
||||
{isLoaded ? (
|
||||
<div className="h-full rounded-xl overflow-hidden shadow-soft-lg border border-border/30 bg-white">
|
||||
{isThemeLoaded ? (
|
||||
<DrawIoEmbed
|
||||
key={`${drawioUi}-${darkMode}`}
|
||||
key={drawioUi}
|
||||
ref={drawioRef}
|
||||
onExport={handleDiagramExport}
|
||||
onLoad={onDrawioLoad}
|
||||
baseUrl={drawioBaseUrl}
|
||||
urlParameters={{
|
||||
ui: drawioUi,
|
||||
spin: true,
|
||||
libraries: false,
|
||||
saveAndExit: false,
|
||||
noExitBtn: true,
|
||||
dark: darkMode,
|
||||
}}
|
||||
/>
|
||||
) : (
|
||||
<div className="h-full w-full flex items-center justify-center bg-background">
|
||||
<div className="h-full w-full flex items-center justify-center">
|
||||
<div className="animate-spin h-8 w-8 border-4 border-primary border-t-transparent rounded-full" />
|
||||
</div>
|
||||
)}
|
||||
@@ -167,7 +130,6 @@ export default function Home() {
|
||||
|
||||
{/* Chat Panel */}
|
||||
<ResizablePanel
|
||||
id="chat-panel"
|
||||
ref={chatPanelRef}
|
||||
defaultSize={isMobile ? 50 : 33}
|
||||
minSize={isMobile ? 20 : 15}
|
||||
@@ -183,14 +145,11 @@ export default function Home() {
|
||||
onToggleVisibility={toggleChatPanel}
|
||||
drawioUi={drawioUi}
|
||||
onToggleDrawioUi={() => {
|
||||
const newUi =
|
||||
const newTheme =
|
||||
drawioUi === "min" ? "sketch" : "min"
|
||||
localStorage.setItem("drawio-theme", newUi)
|
||||
setDrawioUi(newUi)
|
||||
resetDrawioReady()
|
||||
localStorage.setItem("drawio-theme", newTheme)
|
||||
setDrawioUi(newTheme)
|
||||
}}
|
||||
darkMode={darkMode}
|
||||
onToggleDarkMode={toggleDarkMode}
|
||||
isMobile={isMobile}
|
||||
onCloseProtectionChange={setCloseProtection}
|
||||
/>
|
||||
|
||||
3
cloudflare-env.d.ts
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
interface CloudflareEnv {
|
||||
ASSETS: Fetcher
|
||||
}
|
||||
@@ -1,186 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import { useControllableState } from "@radix-ui/react-use-controllable-state"
|
||||
import { BrainIcon, ChevronDownIcon } from "lucide-react"
|
||||
import type { ComponentProps, ReactNode } from "react"
|
||||
import { createContext, memo, useContext, useEffect, useState } from "react"
|
||||
import {
|
||||
Collapsible,
|
||||
CollapsibleContent,
|
||||
CollapsibleTrigger,
|
||||
} from "@/components/ui/collapsible"
|
||||
import { cn } from "@/lib/utils"
|
||||
import { Shimmer } from "./shimmer"
|
||||
|
||||
type ReasoningContextValue = {
|
||||
isStreaming: boolean
|
||||
isOpen: boolean
|
||||
setIsOpen: (open: boolean) => void
|
||||
duration: number | undefined
|
||||
}
|
||||
|
||||
const ReasoningContext = createContext<ReasoningContextValue | null>(null)
|
||||
|
||||
export const useReasoning = () => {
|
||||
const context = useContext(ReasoningContext)
|
||||
if (!context) {
|
||||
throw new Error("Reasoning components must be used within Reasoning")
|
||||
}
|
||||
return context
|
||||
}
|
||||
|
||||
export type ReasoningProps = ComponentProps<typeof Collapsible> & {
|
||||
isStreaming?: boolean
|
||||
open?: boolean
|
||||
defaultOpen?: boolean
|
||||
onOpenChange?: (open: boolean) => void
|
||||
duration?: number
|
||||
}
|
||||
|
||||
const AUTO_CLOSE_DELAY = 1000
|
||||
const MS_IN_S = 1000
|
||||
|
||||
export const Reasoning = memo(
|
||||
({
|
||||
className,
|
||||
isStreaming = false,
|
||||
open,
|
||||
defaultOpen = true,
|
||||
onOpenChange,
|
||||
duration: durationProp,
|
||||
children,
|
||||
...props
|
||||
}: ReasoningProps) => {
|
||||
const [isOpen, setIsOpen] = useControllableState({
|
||||
prop: open,
|
||||
defaultProp: defaultOpen,
|
||||
onChange: onOpenChange,
|
||||
})
|
||||
const [duration, setDuration] = useControllableState({
|
||||
prop: durationProp,
|
||||
defaultProp: undefined,
|
||||
})
|
||||
|
||||
const [hasAutoClosed, setHasAutoClosed] = useState(false)
|
||||
const [startTime, setStartTime] = useState<number | null>(null)
|
||||
|
||||
// Track duration when streaming starts and ends
|
||||
useEffect(() => {
|
||||
if (isStreaming) {
|
||||
if (startTime === null) {
|
||||
setStartTime(Date.now())
|
||||
}
|
||||
} else if (startTime !== null) {
|
||||
setDuration(Math.ceil((Date.now() - startTime) / MS_IN_S))
|
||||
setStartTime(null)
|
||||
}
|
||||
}, [isStreaming, startTime, setDuration])
|
||||
|
||||
// Auto-open when streaming starts, auto-close when streaming ends (once only)
|
||||
useEffect(() => {
|
||||
if (defaultOpen && !isStreaming && isOpen && !hasAutoClosed) {
|
||||
// Add a small delay before closing to allow user to see the content
|
||||
const timer = setTimeout(() => {
|
||||
setIsOpen(false)
|
||||
setHasAutoClosed(true)
|
||||
}, AUTO_CLOSE_DELAY)
|
||||
|
||||
return () => clearTimeout(timer)
|
||||
}
|
||||
}, [isStreaming, isOpen, defaultOpen, setIsOpen, hasAutoClosed])
|
||||
|
||||
const handleOpenChange = (newOpen: boolean) => {
|
||||
setIsOpen(newOpen)
|
||||
}
|
||||
|
||||
return (
|
||||
<ReasoningContext.Provider
|
||||
value={{ isStreaming, isOpen, setIsOpen, duration }}
|
||||
>
|
||||
<Collapsible
|
||||
className={cn("not-prose mb-4", className)}
|
||||
onOpenChange={handleOpenChange}
|
||||
open={isOpen}
|
||||
{...props}
|
||||
>
|
||||
{children}
|
||||
</Collapsible>
|
||||
</ReasoningContext.Provider>
|
||||
)
|
||||
},
|
||||
)
|
||||
|
||||
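For orientation before the sub-components are defined: the intended composition is a trigger plus collapsible content inside the provider, as seen at the call site in the chat message display later in this diff. Names like `status` and `reasoningPart` come from that call site, not from this file.

```tsx
<Reasoning className="w-full" isStreaming={status === "streaming"}>
    <ReasoningTrigger />
    <ReasoningContent>{reasoningPart.text}</ReasoningContent>
</Reasoning>
```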
export type ReasoningTriggerProps = ComponentProps<
|
||||
typeof CollapsibleTrigger
|
||||
> & {
|
||||
getThinkingMessage?: (isStreaming: boolean, duration?: number) => ReactNode
|
||||
}
|
||||
|
||||
const defaultGetThinkingMessage = (isStreaming: boolean, duration?: number) => {
|
||||
if (isStreaming || duration === 0) {
|
||||
return <Shimmer duration={1}>Thinking...</Shimmer>
|
||||
}
|
||||
if (duration === undefined) {
|
||||
return <p>Thought for a few seconds</p>
|
||||
}
|
||||
return <p>Thought for {duration} seconds</p>
|
||||
}
|
||||
|
||||
export const ReasoningTrigger = memo(
|
||||
({
|
||||
className,
|
||||
children,
|
||||
getThinkingMessage = defaultGetThinkingMessage,
|
||||
...props
|
||||
}: ReasoningTriggerProps) => {
|
||||
const { isStreaming, isOpen, duration } = useReasoning()
|
||||
|
||||
return (
|
||||
<CollapsibleTrigger
|
||||
className={cn(
|
||||
"flex w-full items-center gap-2 text-muted-foreground text-sm transition-colors hover:text-foreground",
|
||||
className,
|
||||
)}
|
||||
{...props}
|
||||
>
|
||||
{children ?? (
|
||||
<>
|
||||
<BrainIcon className="size-4" />
|
||||
{getThinkingMessage(isStreaming, duration)}
|
||||
<ChevronDownIcon
|
||||
className={cn(
|
||||
"size-4 transition-transform",
|
||||
isOpen ? "rotate-180" : "rotate-0",
|
||||
)}
|
||||
/>
|
||||
</>
|
||||
)}
|
||||
</CollapsibleTrigger>
|
||||
)
|
||||
},
|
||||
)
|
||||
|
||||
export type ReasoningContentProps = ComponentProps<
|
||||
typeof CollapsibleContent
|
||||
> & {
|
||||
children: string
|
||||
}
|
||||
|
||||
export const ReasoningContent = memo(
|
||||
({ className, children, ...props }: ReasoningContentProps) => (
|
||||
<CollapsibleContent
|
||||
className={cn(
|
||||
"mt-4 text-sm",
|
||||
"data-[state=closed]:fade-out-0 data-[state=closed]:slide-out-to-top-2 data-[state=open]:slide-in-from-top-2 text-muted-foreground outline-none data-[state=closed]:animate-out data-[state=open]:animate-in",
|
||||
className,
|
||||
)}
|
||||
{...props}
|
||||
>
|
||||
<div className="whitespace-pre-wrap">{children}</div>
|
||||
</CollapsibleContent>
|
||||
),
|
||||
)
|
||||
|
||||
Reasoning.displayName = "Reasoning"
|
||||
ReasoningTrigger.displayName = "ReasoningTrigger"
|
||||
ReasoningContent.displayName = "ReasoningContent"
|
||||
@@ -1,64 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import { motion } from "motion/react"
|
||||
import {
|
||||
type CSSProperties,
|
||||
type ElementType,
|
||||
type JSX,
|
||||
memo,
|
||||
useMemo,
|
||||
} from "react"
|
||||
import { cn } from "@/lib/utils"
|
||||
|
||||
export type TextShimmerProps = {
|
||||
children: string
|
||||
as?: ElementType
|
||||
className?: string
|
||||
duration?: number
|
||||
spread?: number
|
||||
}
|
||||
|
||||
const ShimmerComponent = ({
|
||||
children,
|
||||
as: Component = "p",
|
||||
className,
|
||||
duration = 2,
|
||||
spread = 2,
|
||||
}: TextShimmerProps) => {
|
||||
const MotionComponent = motion.create(
|
||||
Component as keyof JSX.IntrinsicElements,
|
||||
)
|
||||
|
||||
const dynamicSpread = useMemo(
|
||||
() => (children?.length ?? 0) * spread,
|
||||
[children, spread],
|
||||
)
|
||||
|
||||
return (
|
||||
<MotionComponent
|
||||
animate={{ backgroundPosition: "0% center" }}
|
||||
className={cn(
|
||||
"relative inline-block bg-[length:250%_100%,auto] bg-clip-text text-transparent",
|
||||
"[--bg:linear-gradient(90deg,#0000_calc(50%-var(--spread)),var(--color-background),#0000_calc(50%+var(--spread)))] [background-repeat:no-repeat,padding-box]",
|
||||
className,
|
||||
)}
|
||||
initial={{ backgroundPosition: "100% center" }}
|
||||
style={
|
||||
{
|
||||
"--spread": `${dynamicSpread}px`,
|
||||
backgroundImage:
|
||||
"var(--bg), linear-gradient(var(--color-muted-foreground), var(--color-muted-foreground))",
|
||||
} as CSSProperties
|
||||
}
|
||||
transition={{
|
||||
repeat: Number.POSITIVE_INFINITY,
|
||||
duration,
|
||||
ease: "linear",
|
||||
}}
|
||||
>
|
||||
{children}
|
||||
</MotionComponent>
|
||||
)
|
||||
}
|
||||
|
||||
export const Shimmer = memo(ShimmerComponent)
|
||||
@@ -1,52 +1,28 @@
|
||||
"use client"
|
||||
|
||||
import { Cloud, FileText, GitBranch, Palette, Zap } from "lucide-react"
|
||||
import { Cloud, GitBranch, Palette, Zap } from "lucide-react"
|
||||
|
||||
interface ExampleCardProps {
|
||||
icon: React.ReactNode
|
||||
title: string
|
||||
description: string
|
||||
onClick: () => void
|
||||
isNew?: boolean
|
||||
}
|
||||
|
||||
function ExampleCard({
|
||||
icon,
|
||||
title,
|
||||
description,
|
||||
onClick,
|
||||
isNew,
|
||||
}: ExampleCardProps) {
|
||||
function ExampleCard({ icon, title, description, onClick }: ExampleCardProps) {
|
||||
return (
|
||||
<button
|
||||
onClick={onClick}
|
||||
className={`group w-full text-left p-4 rounded-xl border bg-card hover:bg-accent/50 hover:border-primary/30 transition-all duration-200 hover:shadow-sm ${
|
||||
isNew
|
||||
? "border-primary/40 ring-1 ring-primary/20"
|
||||
: "border-border/60"
|
||||
}`}
|
||||
className="group w-full text-left p-4 rounded-xl border border-border/60 bg-card hover:bg-accent/50 hover:border-primary/30 transition-all duration-200 hover:shadow-sm"
|
||||
>
|
||||
<div className="flex items-start gap-3">
|
||||
<div
|
||||
className={`w-9 h-9 rounded-lg flex items-center justify-center shrink-0 transition-colors ${
|
||||
isNew
|
||||
? "bg-primary/20 group-hover:bg-primary/25"
|
||||
: "bg-primary/10 group-hover:bg-primary/15"
|
||||
}`}
|
||||
>
|
||||
<div className="w-9 h-9 rounded-lg bg-primary/10 flex items-center justify-center shrink-0 group-hover:bg-primary/15 transition-colors">
|
||||
{icon}
|
||||
</div>
|
||||
<div className="min-w-0">
|
||||
<div className="flex items-center gap-2">
|
||||
<h3 className="text-sm font-medium text-foreground group-hover:text-primary transition-colors">
|
||||
{title}
|
||||
</h3>
|
||||
{isNew && (
|
||||
<span className="px-1.5 py-0.5 text-[10px] font-semibold bg-primary text-primary-foreground rounded">
|
||||
NEW
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
<h3 className="text-sm font-medium text-foreground group-hover:text-primary transition-colors">
|
||||
{title}
|
||||
</h3>
|
||||
<p className="text-xs text-muted-foreground mt-0.5 line-clamp-2">
|
||||
{description}
|
||||
</p>
|
||||
@@ -91,21 +67,6 @@ export default function ExamplePanel({
|
||||
}
|
||||
}
|
||||
|
||||
const handlePdfExample = async () => {
|
||||
setInput("Summarize this paper as a diagram")
|
||||
|
||||
try {
|
||||
const response = await fetch("/chain-of-thought.txt")
|
||||
const blob = await response.blob()
|
||||
const file = new File([blob], "chain-of-thought.txt", {
|
||||
type: "text/plain",
|
||||
})
|
||||
setFiles([file])
|
||||
} catch (error) {
|
||||
console.error("Error loading text file:", error)
|
||||
}
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="py-6 px-2 animate-fade-in">
|
||||
{/* Welcome section */}
|
||||
@@ -126,14 +87,6 @@ export default function ExamplePanel({
|
||||
</p>
|
||||
|
||||
<div className="grid gap-2">
|
||||
<ExampleCard
|
||||
icon={<FileText className="w-4 h-4 text-primary" />}
|
||||
title="Paper to Diagram"
|
||||
description="Upload .pdf, .txt, .md, .json, .csv, .py, .js, .ts and more"
|
||||
onClick={handlePdfExample}
|
||||
isNew
|
||||
/>
|
||||
|
||||
<ExampleCard
|
||||
icon={<Zap className="w-4 h-4 text-primary" />}
|
||||
title="Animated Diagram"
|
||||
|
||||
@@ -4,7 +4,9 @@ import {
|
||||
Download,
|
||||
History,
|
||||
Image as ImageIcon,
|
||||
LayoutGrid,
|
||||
Loader2,
|
||||
PenTool,
|
||||
Send,
|
||||
Trash2,
|
||||
} from "lucide-react"
|
||||
@@ -17,18 +19,21 @@ import { HistoryDialog } from "@/components/history-dialog"
|
||||
import { ResetWarningModal } from "@/components/reset-warning-modal"
|
||||
import { SaveDialog } from "@/components/save-dialog"
|
||||
import { Button } from "@/components/ui/button"
|
||||
import {
|
||||
Dialog,
|
||||
DialogContent,
|
||||
DialogDescription,
|
||||
DialogFooter,
|
||||
DialogHeader,
|
||||
DialogTitle,
|
||||
} from "@/components/ui/dialog"
|
||||
import { Textarea } from "@/components/ui/textarea"
|
||||
import { useDiagram } from "@/contexts/diagram-context"
|
||||
import { isPdfFile, isTextFile } from "@/lib/pdf-utils"
|
||||
import { FilePreviewList } from "./file-preview-list"
|
||||
|
||||
const MAX_IMAGE_SIZE = 2 * 1024 * 1024 // 2MB
|
||||
const MAX_FILE_SIZE = 2 * 1024 * 1024 // 2MB
|
||||
const MAX_FILES = 5
|
||||
|
||||
function isValidFileType(file: File): boolean {
|
||||
return file.type.startsWith("image/") || isPdfFile(file) || isTextFile(file)
|
||||
}
|
||||
|
||||
function formatFileSize(bytes: number): string {
|
||||
const mb = bytes / 1024 / 1024
|
||||
if (mb < 0.01) return `${(bytes / 1024).toFixed(0)}KB`
|
||||
@@ -68,16 +73,9 @@ function validateFiles(
|
||||
errors.push(`Only ${availableSlots} more file(s) allowed`)
|
||||
break
|
||||
}
|
||||
if (!isValidFileType(file)) {
|
||||
errors.push(`"${file.name}" is not a supported file type`)
|
||||
continue
|
||||
}
|
||||
// Only check size for images (PDFs/text files are extracted client-side, so file size doesn't matter)
|
||||
const isExtractedFile = isPdfFile(file) || isTextFile(file)
|
||||
if (!isExtractedFile && file.size > MAX_IMAGE_SIZE) {
|
||||
const maxSizeMB = MAX_IMAGE_SIZE / 1024 / 1024
|
||||
if (file.size > MAX_FILE_SIZE) {
|
||||
errors.push(
|
||||
`"${file.name}" is ${formatFileSize(file.size)} (exceeds ${maxSizeMB}MB)`,
|
||||
`"${file.name}" is ${formatFileSize(file.size)} (exceeds 2MB)`,
|
||||
)
|
||||
} else {
|
||||
validFiles.push(file)
|
||||
@@ -121,14 +119,12 @@ interface ChatInputProps {
|
||||
onClearChat: () => void
|
||||
files?: File[]
|
||||
onFileChange?: (files: File[]) => void
|
||||
pdfData?: Map<
|
||||
File,
|
||||
{ text: string; charCount: number; isExtracting: boolean }
|
||||
>
|
||||
showHistory?: boolean
|
||||
onToggleHistory?: (show: boolean) => void
|
||||
sessionId?: string
|
||||
error?: Error | null
|
||||
drawioUi?: "min" | "sketch"
|
||||
onToggleDrawioUi?: () => void
|
||||
}
|
||||
|
||||
export function ChatInput({
|
||||
@@ -139,11 +135,12 @@ export function ChatInput({
|
||||
onClearChat,
|
||||
files = [],
|
||||
onFileChange = () => {},
|
||||
pdfData = new Map(),
|
||||
showHistory = false,
|
||||
onToggleHistory = () => {},
|
||||
sessionId,
|
||||
error = null,
|
||||
drawioUi = "min",
|
||||
onToggleDrawioUi = () => {},
|
||||
}: ChatInputProps) {
|
||||
const { diagramHistory, saveDiagramToFile } = useDiagram()
|
||||
const textareaRef = useRef<HTMLTextAreaElement>(null)
|
||||
@@ -151,6 +148,7 @@ export function ChatInput({
|
||||
const [isDragging, setIsDragging] = useState(false)
|
||||
const [showClearDialog, setShowClearDialog] = useState(false)
|
||||
const [showSaveDialog, setShowSaveDialog] = useState(false)
|
||||
const [showThemeWarning, setShowThemeWarning] = useState(false)
|
||||
|
||||
// Allow retry when there's an error (even if status is still "streaming" or "submitted")
|
||||
const isDisabled =
|
||||
@@ -262,14 +260,11 @@ export function ChatInput({
|
||||
if (isDisabled) return
|
||||
|
||||
const droppedFiles = e.dataTransfer.files
|
||||
const supportedFiles = Array.from(droppedFiles).filter((file) =>
|
||||
isValidFileType(file),
|
||||
const imageFiles = Array.from(droppedFiles).filter((file) =>
|
||||
file.type.startsWith("image/"),
|
||||
)
|
||||
|
||||
const { validFiles, errors } = validateFiles(
|
||||
supportedFiles,
|
||||
files.length,
|
||||
)
|
||||
const { validFiles, errors } = validateFiles(imageFiles, files.length)
|
||||
showValidationErrors(errors)
|
||||
if (validFiles.length > 0) {
|
||||
onFileChange([...files, ...validFiles])
|
||||
@@ -299,7 +294,6 @@ export function ChatInput({
|
||||
<FilePreviewList
|
||||
files={files}
|
||||
onRemoveFile={handleRemoveFile}
|
||||
pdfData={pdfData}
|
||||
/>
|
||||
</div>
|
||||
)}
|
||||
@@ -312,7 +306,7 @@ export function ChatInput({
|
||||
onChange={handleChange}
|
||||
onKeyDown={handleKeyDown}
|
||||
onPaste={handlePaste}
|
||||
placeholder="Describe your diagram or upload a file..."
|
||||
placeholder="Describe your diagram or paste an image..."
|
||||
disabled={isDisabled}
|
||||
aria-label="Chat input"
|
||||
className="min-h-[60px] max-h-[200px] resize-none border-0 bg-transparent px-4 py-3 text-sm focus-visible:ring-0 focus-visible:ring-offset-0 placeholder:text-muted-foreground/60"
|
||||
@@ -343,6 +337,60 @@ export function ChatInput({
|
||||
showHistory={showHistory}
|
||||
onToggleHistory={onToggleHistory}
|
||||
/>
|
||||
|
||||
<ButtonWithTooltip
|
||||
type="button"
|
||||
variant="ghost"
|
||||
size="sm"
|
||||
onClick={() => setShowThemeWarning(true)}
|
||||
tooltipContent={
|
||||
drawioUi === "min"
|
||||
? "Switch to Sketch theme"
|
||||
: "Switch to Minimal theme"
|
||||
}
|
||||
className="h-8 w-8 p-0 text-muted-foreground hover:text-foreground"
|
||||
>
|
||||
{drawioUi === "min" ? (
|
||||
<PenTool className="h-4 w-4" />
|
||||
) : (
|
||||
<LayoutGrid className="h-4 w-4" />
|
||||
)}
|
||||
</ButtonWithTooltip>
|
||||
|
||||
<Dialog
|
||||
open={showThemeWarning}
|
||||
onOpenChange={setShowThemeWarning}
|
||||
>
|
||||
<DialogContent>
|
||||
<DialogHeader>
|
||||
<DialogTitle>Switch Theme?</DialogTitle>
|
||||
<DialogDescription>
|
||||
Switching themes will reload the diagram
|
||||
editor and clear any unsaved changes.
|
||||
</DialogDescription>
|
||||
</DialogHeader>
|
||||
<DialogFooter>
|
||||
<Button
|
||||
variant="outline"
|
||||
onClick={() =>
|
||||
setShowThemeWarning(false)
|
||||
}
|
||||
>
|
||||
Cancel
|
||||
</Button>
|
||||
<Button
|
||||
variant="destructive"
|
||||
onClick={() => {
|
||||
onClearChat()
|
||||
onToggleDrawioUi()
|
||||
setShowThemeWarning(false)
|
||||
}}
|
||||
>
|
||||
Switch Theme
|
||||
</Button>
|
||||
</DialogFooter>
|
||||
</DialogContent>
|
||||
</Dialog>
|
||||
</div>
|
||||
|
||||
{/* Right actions */}
|
||||
@@ -388,7 +436,7 @@ export function ChatInput({
|
||||
size="sm"
|
||||
onClick={triggerFileInput}
|
||||
disabled={isDisabled}
|
||||
tooltipContent="Upload file (image, PDF, text)"
|
||||
tooltipContent="Upload image"
|
||||
className="h-8 w-8 p-0 text-muted-foreground hover:text-foreground"
|
||||
>
|
||||
<ImageIcon className="h-4 w-4" />
|
||||
@@ -399,7 +447,7 @@ export function ChatInput({
|
||||
ref={fileInputRef}
|
||||
className="hidden"
|
||||
onChange={handleFileChange}
|
||||
accept="image/*,.pdf,application/pdf,text/*,.md,.markdown,.json,.csv,.xml,.yaml,.yml,.toml"
|
||||
accept="image/*"
|
||||
multiple
|
||||
disabled={isDisabled}
|
||||
/>
|
||||
|
||||
@@ -8,8 +8,6 @@ import {
|
||||
ChevronUp,
|
||||
Copy,
|
||||
Cpu,
|
||||
FileCode,
|
||||
FileText,
|
||||
Minus,
|
||||
Pencil,
|
||||
Plus,
|
||||
@@ -19,17 +17,14 @@ import {
|
||||
X,
|
||||
} from "lucide-react"
|
||||
import Image from "next/image"
|
||||
import type { MutableRefObject } from "react"
|
||||
import { useCallback, useEffect, useRef, useState } from "react"
|
||||
import ReactMarkdown from "react-markdown"
|
||||
import { toast } from "sonner"
|
||||
import {
|
||||
Reasoning,
|
||||
ReasoningContent,
|
||||
ReasoningTrigger,
|
||||
} from "@/components/ai-elements/reasoning"
|
||||
import { ScrollArea } from "@/components/ui/scroll-area"
|
||||
import { convertToLegalXml, replaceNodes, validateAndFixXml } from "@/lib/utils"
|
||||
import {
|
||||
convertToLegalXml,
|
||||
replaceNodes,
|
||||
validateMxCellStructure,
|
||||
} from "@/lib/utils"
|
||||
import ExamplePanel from "./chat-example-panel"
|
||||
import { CodeBlock } from "./code-block"
|
||||
|
||||
@@ -94,59 +89,6 @@ function EditDiffDisplay({ edits }: { edits: EditPair[] }) {
|
||||
|
||||
import { useDiagram } from "@/contexts/diagram-context"
|
||||
|
||||
// Helper to split text content into regular text and file sections (PDF or text files)
|
||||
interface TextSection {
|
||||
type: "text" | "file"
|
||||
content: string
|
||||
filename?: string
|
||||
charCount?: number
|
||||
fileType?: "pdf" | "text"
|
||||
}
|
||||
|
||||
function splitTextIntoFileSections(text: string): TextSection[] {
|
||||
const sections: TextSection[] = []
|
||||
// Match [PDF: filename] or [File: filename] patterns
|
||||
const filePattern =
|
||||
/\[(PDF|File):\s*([^\]]+)\]\n([\s\S]*?)(?=\n\n\[(PDF|File):|$)/g
|
||||
let lastIndex = 0
|
||||
let match
|
||||
|
||||
while ((match = filePattern.exec(text)) !== null) {
|
||||
// Add text before this file section
|
||||
const beforeText = text.slice(lastIndex, match.index).trim()
|
||||
if (beforeText) {
|
||||
sections.push({ type: "text", content: beforeText })
|
||||
}
|
||||
|
||||
// Add file section
|
||||
const fileType = match[1].toLowerCase() === "pdf" ? "pdf" : "text"
|
||||
const filename = match[2].trim()
|
||||
const fileContent = match[3].trim()
|
||||
sections.push({
|
||||
type: "file",
|
||||
content: fileContent,
|
||||
filename,
|
||||
charCount: fileContent.length,
|
||||
fileType,
|
||||
})
|
||||
|
||||
lastIndex = match.index + match[0].length
|
||||
}
|
||||
|
||||
// Add remaining text after last file section
|
||||
const remainingText = text.slice(lastIndex).trim()
|
||||
if (remainingText) {
|
||||
sections.push({ type: "text", content: remainingText })
|
||||
}
|
||||
|
||||
// If no file sections found, return original text
|
||||
if (sections.length === 0) {
|
||||
sections.push({ type: "text", content: text })
|
||||
}
|
||||
|
||||
return sections
|
||||
}
|
||||
|
||||
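To make the splitter's behaviour concrete, here is roughly what it produces for a message that mixes a prompt with one attached text file. The sample input and the resulting values are invented for illustration.

```ts
const message =
    "Summarize this paper as a diagram\n\n[File: chain-of-thought.txt]\nChain-of-thought prompting..."

splitTextIntoFileSections(message)
// → [
//     { type: "text", content: "Summarize this paper as a diagram" },
//     {
//         type: "file",
//         content: "Chain-of-thought prompting...",
//         filename: "chain-of-thought.txt",
//         charCount: 29,
//         fileType: "text",
//     },
//   ]
```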
const getMessageTextContent = (message: UIMessage): string => {
|
||||
if (!message.parts) return ""
|
||||
return message.parts
|
||||
@@ -155,39 +97,27 @@ const getMessageTextContent = (message: UIMessage): string => {
|
||||
.join("\n")
|
||||
}
|
||||
|
||||
// Get only the user's original text, excluding appended file content
|
||||
const getUserOriginalText = (message: UIMessage): string => {
|
||||
const fullText = getMessageTextContent(message)
|
||||
// Strip out [PDF: ...] and [File: ...] sections that were appended
|
||||
const filePattern = /\n\n\[(PDF|File):\s*[^\]]+\]\n[\s\S]*$/
|
||||
return fullText.replace(filePattern, "").trim()
|
||||
}
|
||||
|
||||
interface ChatMessageDisplayProps {
|
||||
messages: UIMessage[]
|
||||
setInput: (input: string) => void
|
||||
setFiles: (files: File[]) => void
|
||||
processedToolCallsRef: MutableRefObject<Set<string>>
|
||||
sessionId?: string
|
||||
onRegenerate?: (messageIndex: number) => void
|
||||
onEditMessage?: (messageIndex: number, newText: string) => void
|
||||
status?: "streaming" | "submitted" | "idle" | "error" | "ready"
|
||||
}
|
||||
|
||||
export function ChatMessageDisplay({
|
||||
messages,
|
||||
setInput,
|
||||
setFiles,
|
||||
processedToolCallsRef,
|
||||
sessionId,
|
||||
onRegenerate,
|
||||
onEditMessage,
|
||||
status = "idle",
|
||||
}: ChatMessageDisplayProps) {
|
||||
const { chartXML, loadDiagram: onDisplayChart } = useDiagram()
|
||||
const messagesEndRef = useRef<HTMLDivElement>(null)
|
||||
const previousXML = useRef<string>("")
|
||||
const processedToolCalls = processedToolCallsRef
|
||||
const processedToolCalls = useRef<Set<string>>(new Set())
|
||||
const [expandedTools, setExpandedTools] = useState<Record<string, boolean>>(
|
||||
{},
|
||||
)
|
||||
@@ -201,44 +131,16 @@ export function ChatMessageDisplay({
|
||||
)
|
||||
const editTextareaRef = useRef<HTMLTextAreaElement>(null)
|
||||
const [editText, setEditText] = useState<string>("")
|
||||
// Track which PDF sections are expanded (key: messageId-sectionIndex)
|
||||
const [expandedPdfSections, setExpandedPdfSections] = useState<
|
||||
Record<string, boolean>
|
||||
>({})
|
||||
|
||||
const copyMessageToClipboard = async (messageId: string, text: string) => {
|
||||
try {
|
||||
await navigator.clipboard.writeText(text)
|
||||
|
||||
setCopiedMessageId(messageId)
|
||||
setTimeout(() => setCopiedMessageId(null), 2000)
|
||||
} catch (err) {
|
||||
// Fallback for non-secure contexts (HTTP) or permission denied
|
||||
const textarea = document.createElement("textarea")
|
||||
textarea.value = text
|
||||
textarea.style.position = "fixed"
|
||||
textarea.style.left = "-9999px"
|
||||
textarea.style.opacity = "0"
|
||||
document.body.appendChild(textarea)
|
||||
|
||||
try {
|
||||
textarea.select()
|
||||
const success = document.execCommand("copy")
|
||||
if (!success) {
|
||||
throw new Error("Copy command failed")
|
||||
}
|
||||
setCopiedMessageId(messageId)
|
||||
setTimeout(() => setCopiedMessageId(null), 2000)
|
||||
} catch (fallbackErr) {
|
||||
console.error("Failed to copy message:", fallbackErr)
|
||||
toast.error(
|
||||
"Failed to copy message. Please copy manually or check clipboard permissions.",
|
||||
)
|
||||
setCopyFailedMessageId(messageId)
|
||||
setTimeout(() => setCopyFailedMessageId(null), 2000)
|
||||
} finally {
|
||||
document.body.removeChild(textarea)
|
||||
}
|
||||
console.error("Failed to copy message:", err)
|
||||
setCopyFailedMessageId(messageId)
|
||||
setTimeout(() => setCopyFailedMessageId(null), 2000)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -266,85 +168,30 @@ export function ChatMessageDisplay({
|
||||
}),
|
||||
})
|
||||
} catch (error) {
|
||||
console.error("Failed to log feedback:", error)
|
||||
toast.error("Failed to record your feedback. Please try again.")
|
||||
// Revert optimistic UI update
|
||||
setFeedback((prev) => {
|
||||
const next = { ...prev }
|
||||
delete next[messageId]
|
||||
return next
|
||||
})
|
||||
console.warn("Failed to log feedback:", error)
|
||||
}
|
||||
}
|
||||
|
||||
const handleDisplayChart = useCallback(
|
||||
(xml: string, showToast = false) => {
|
||||
(xml: string) => {
|
||||
const currentXml = xml || ""
|
||||
const convertedXml = convertToLegalXml(currentXml)
|
||||
if (convertedXml !== previousXML.current) {
|
||||
// Parse and validate XML BEFORE calling replaceNodes
|
||||
const parser = new DOMParser()
|
||||
const testDoc = parser.parseFromString(convertedXml, "text/xml")
|
||||
const parseError = testDoc.querySelector("parsererror")
|
||||
// If chartXML is empty, use the converted XML directly
|
||||
const replacedXML = chartXML
|
||||
? replaceNodes(chartXML, convertedXml)
|
||||
: convertedXml
|
||||
|
||||
if (parseError) {
|
||||
console.error(
|
||||
"[ChatMessageDisplay] Malformed XML detected - skipping update",
|
||||
const validationError = validateMxCellStructure(replacedXML)
|
||||
if (!validationError) {
|
||||
previousXML.current = convertedXml
|
||||
// Skip validation in loadDiagram since we already validated above
|
||||
onDisplayChart(replacedXML, true)
|
||||
} else {
|
||||
console.log(
|
||||
"[ChatMessageDisplay] XML validation failed:",
|
||||
validationError,
|
||||
)
|
||||
// Only show toast if this is the final XML (not during streaming)
|
||||
if (showToast) {
|
||||
toast.error(
|
||||
"AI generated invalid diagram XML. Please try regenerating.",
|
||||
)
|
||||
}
|
||||
return // Skip this update
|
||||
}
|
||||
|
||||
try {
|
||||
// If chartXML is empty, create a default mxfile structure to use with replaceNodes
|
||||
// This ensures the XML is properly wrapped in mxfile/diagram/mxGraphModel format
|
||||
const baseXML =
|
||||
chartXML ||
|
||||
`<mxfile><diagram name="Page-1" id="page-1"><mxGraphModel><root><mxCell id="0"/><mxCell id="1" parent="0"/></root></mxGraphModel></diagram></mxfile>`
|
||||
const replacedXML = replaceNodes(baseXML, convertedXml)
|
||||
|
||||
// Validate and auto-fix the XML
|
||||
const validation = validateAndFixXml(replacedXML)
|
||||
if (validation.valid) {
|
||||
previousXML.current = convertedXml
|
||||
// Use fixed XML if available, otherwise use original
|
||||
const xmlToLoad = validation.fixed || replacedXML
|
||||
if (validation.fixes.length > 0) {
|
||||
console.log(
|
||||
"[ChatMessageDisplay] Auto-fixed XML issues:",
|
||||
validation.fixes,
|
||||
)
|
||||
}
|
||||
// Skip validation in loadDiagram since we already validated above
|
||||
onDisplayChart(xmlToLoad, true)
|
||||
} else {
|
||||
console.error(
|
||||
"[ChatMessageDisplay] XML validation failed:",
|
||||
validation.error,
|
||||
)
|
||||
// Only show toast if this is the final XML (not during streaming)
|
||||
if (showToast) {
|
||||
toast.error(
|
||||
"Diagram validation failed. Please try regenerating.",
|
||||
)
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(
|
||||
"[ChatMessageDisplay] Error processing XML:",
|
||||
error,
|
||||
)
|
||||
// Only show toast if this is the final XML (not during streaming)
|
||||
if (showToast) {
|
||||
toast.error(
|
||||
"Failed to process diagram. Please try regenerating.",
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
@@ -387,14 +234,12 @@ export function ChatMessageDisplay({
|
||||
state === "input-streaming" ||
|
||||
state === "input-available"
|
||||
) {
|
||||
// During streaming, don't show toast (XML may be incomplete)
|
||||
handleDisplayChart(xml, false)
|
||||
handleDisplayChart(xml)
|
||||
} else if (
|
||||
state === "output-available" &&
|
||||
!processedToolCalls.current.has(toolCallId)
|
||||
) {
|
||||
// Show toast only if final XML is malformed
|
||||
handleDisplayChart(xml, true)
|
||||
handleDisplayChart(xml)
|
||||
processedToolCalls.current.add(toolCallId)
|
||||
}
|
||||
}
|
||||
@@ -543,9 +388,7 @@ export function ChatMessageDisplay({
|
||||
message.id,
|
||||
)
|
||||
setEditText(
|
||||
getUserOriginalText(
|
||||
message,
|
||||
),
|
||||
userMessageText,
|
||||
)
|
||||
}}
|
||||
className="p-1.5 rounded-lg text-muted-foreground/60 hover:text-muted-foreground hover:bg-muted transition-colors"
|
||||
@@ -586,52 +429,6 @@ export function ChatMessageDisplay({
|
||||
</div>
|
||||
)}
|
||||
<div className="max-w-[85%] min-w-0">
|
||||
{/* Reasoning blocks - displayed first for assistant messages */}
|
||||
{message.role === "assistant" &&
|
||||
message.parts?.map(
|
||||
(part, partIndex) => {
|
||||
if (part.type === "reasoning") {
|
||||
const reasoningPart =
|
||||
part as {
|
||||
type: "reasoning"
|
||||
text: string
|
||||
}
|
||||
const isLastPart =
|
||||
partIndex ===
|
||||
(message.parts
|
||||
?.length ?? 0) -
|
||||
1
|
||||
const isLastMessage =
|
||||
message.id ===
|
||||
messages[
|
||||
messages.length - 1
|
||||
]?.id
|
||||
const isStreamingReasoning =
|
||||
status ===
|
||||
"streaming" &&
|
||||
isLastPart &&
|
||||
isLastMessage
|
||||
|
||||
return (
|
||||
<Reasoning
|
||||
key={`${message.id}-reasoning-${partIndex}`}
|
||||
className="w-full"
|
||||
isStreaming={
|
||||
isStreamingReasoning
|
||||
}
|
||||
>
|
||||
<ReasoningTrigger />
|
||||
<ReasoningContent>
|
||||
{
|
||||
reasoningPart.text
|
||||
}
|
||||
</ReasoningContent>
|
||||
</Reasoning>
|
||||
)
|
||||
}
|
||||
return null
|
||||
},
|
||||
)}
|
||||
{/* Edit mode for user messages */}
|
||||
{isEditing && message.role === "user" ? (
|
||||
<div className="flex flex-col gap-2">
|
||||
@@ -807,9 +604,7 @@ export function ChatMessageDisplay({
|
||||
message.id,
|
||||
)
|
||||
setEditText(
|
||||
getUserOriginalText(
|
||||
message,
|
||||
),
|
||||
userMessageText,
|
||||
)
|
||||
}
|
||||
}}
|
||||
@@ -829,9 +624,7 @@ export function ChatMessageDisplay({
|
||||
message.id,
|
||||
)
|
||||
setEditText(
|
||||
getUserOriginalText(
|
||||
message,
|
||||
),
|
||||
userMessageText,
|
||||
)
|
||||
}
|
||||
}}
|
||||
@@ -853,126 +646,26 @@ export function ChatMessageDisplay({
|
||||
part.type ===
|
||||
"text"
|
||||
) {
|
||||
const textContent =
|
||||
(
|
||||
part as {
|
||||
text: string
|
||||
}
|
||||
)
|
||||
.text
|
||||
const sections =
|
||||
splitTextIntoFileSections(
|
||||
textContent,
|
||||
)
|
||||
return (
|
||||
<div
|
||||
key={`${message.id}-text-${group.startIndex}-${partIndex}`}
|
||||
className="space-y-2"
|
||||
className={`prose prose-sm max-w-none break-words [&>*:first-child]:mt-0 [&>*:last-child]:mb-0 ${
|
||||
message.role ===
|
||||
"user"
|
||||
? "[&_*]:!text-primary-foreground prose-code:bg-white/20"
|
||||
: "dark:prose-invert"
|
||||
}`}
|
||||
>
|
||||
{sections.map(
|
||||
(
|
||||
section,
|
||||
sectionIndex,
|
||||
) => {
|
||||
if (
|
||||
section.type ===
|
||||
"file"
|
||||
) {
|
||||
const pdfKey = `${message.id}-file-${partIndex}-${sectionIndex}`
|
||||
const isExpanded =
|
||||
expandedPdfSections[
|
||||
pdfKey
|
||||
] ??
|
||||
false
|
||||
const charDisplay =
|
||||
section.charCount &&
|
||||
section.charCount >=
|
||||
1000
|
||||
? `${(section.charCount / 1000).toFixed(1)}k`
|
||||
: section.charCount
|
||||
return (
|
||||
<div
|
||||
key={
|
||||
pdfKey
|
||||
}
|
||||
className="rounded-lg border border-border/60 bg-muted/30 overflow-hidden"
|
||||
>
|
||||
<button
|
||||
type="button"
|
||||
onClick={(
|
||||
e,
|
||||
) => {
|
||||
e.stopPropagation()
|
||||
setExpandedPdfSections(
|
||||
(
|
||||
prev,
|
||||
) => ({
|
||||
...prev,
|
||||
[pdfKey]:
|
||||
!isExpanded,
|
||||
}),
|
||||
)
|
||||
}}
|
||||
className="w-full flex items-center justify-between px-3 py-2 hover:bg-muted/50 transition-colors"
|
||||
>
|
||||
<div className="flex items-center gap-2">
|
||||
{section.fileType ===
|
||||
"pdf" ? (
|
||||
<FileText className="h-4 w-4 text-red-500" />
|
||||
) : (
|
||||
<FileCode className="h-4 w-4 text-blue-500" />
|
||||
)}
|
||||
<span className="text-xs font-medium">
|
||||
{
|
||||
section.filename
|
||||
}
|
||||
</span>
|
||||
<span className="text-[10px] text-muted-foreground">
|
||||
(
|
||||
{
|
||||
charDisplay
|
||||
}{" "}
|
||||
chars)
|
||||
</span>
|
||||
</div>
|
||||
{isExpanded ? (
|
||||
<ChevronUp className="h-4 w-4 text-muted-foreground" />
|
||||
) : (
|
||||
<ChevronDown className="h-4 w-4 text-muted-foreground" />
|
||||
)}
|
||||
</button>
|
||||
{isExpanded && (
|
||||
<div className="px-3 py-2 border-t border-border/40 max-h-48 overflow-y-auto bg-muted/30">
|
||||
<pre className="text-xs whitespace-pre-wrap text-foreground/80">
|
||||
{
|
||||
section.content
|
||||
}
|
||||
</pre>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
)
|
||||
}
|
||||
// Regular text section
|
||||
return (
|
||||
<div
|
||||
key={`${message.id}-textsection-${partIndex}-${sectionIndex}`}
|
||||
className={`prose prose-sm max-w-none break-words [&>*:first-child]:mt-0 [&>*:last-child]:mb-0 ${
|
||||
message.role ===
|
||||
"user"
|
||||
? "[&_*]:!text-primary-foreground prose-code:bg-white/20"
|
||||
: "dark:prose-invert"
|
||||
}`}
|
||||
>
|
||||
<ReactMarkdown>
|
||||
{
|
||||
section.content
|
||||
}
|
||||
</ReactMarkdown>
|
||||
</div>
|
||||
<ReactMarkdown>
|
||||
{
|
||||
(
|
||||
part as {
|
||||
text: string
|
||||
}
|
||||
)
|
||||
},
|
||||
)}
|
||||
.text
|
||||
}
|
||||
</ReactMarkdown>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
|
||||
@@ -1,10 +1,12 @@
|
||||
"use client"
|
||||
|
||||
import { useChat } from "@ai-sdk/react"
|
||||
import { DefaultChatTransport } from "ai"
|
||||
import {
|
||||
AlertTriangle,
|
||||
MessageSquarePlus,
|
||||
DefaultChatTransport,
|
||||
lastAssistantMessageIsCompleteWithToolCalls,
|
||||
} from "ai"
|
||||
import {
|
||||
CheckCircle,
|
||||
PanelRightClose,
|
||||
PanelRightOpen,
|
||||
Settings,
|
||||
@@ -15,85 +17,39 @@ import type React from "react"
|
||||
import { useCallback, useEffect, useRef, useState } from "react"
|
||||
import { flushSync } from "react-dom"
|
||||
import { FaGithub } from "react-icons/fa"
|
||||
import { Toaster, toast } from "sonner"
|
||||
import { Toaster } from "sonner"
|
||||
import { ButtonWithTooltip } from "@/components/button-with-tooltip"
|
||||
import { ChatInput } from "@/components/chat-input"
|
||||
import { ResetWarningModal } from "@/components/reset-warning-modal"
|
||||
import { SettingsDialog } from "@/components/settings-dialog"
|
||||
import { useDiagram } from "@/contexts/diagram-context"
|
||||
import { getAIConfig } from "@/lib/ai-config"
|
||||
import { findCachedResponse } from "@/lib/cached-responses"
|
||||
import { isPdfFile, isTextFile } from "@/lib/pdf-utils"
|
||||
import { type FileData, useFileProcessor } from "@/lib/use-file-processor"
|
||||
import { useQuotaManager } from "@/lib/use-quota-manager"
|
||||
import { formatXML, wrapWithMxFile } from "@/lib/utils"
|
||||
import { ChatMessageDisplay } from "./chat-message-display"
|
||||
import {
|
||||
SettingsDialog,
|
||||
STORAGE_ACCESS_CODE_KEY,
|
||||
} from "@/components/settings-dialog"
|
||||
|
||||
// localStorage keys for persistence
|
||||
const STORAGE_MESSAGES_KEY = "next-ai-draw-io-messages"
|
||||
const STORAGE_XML_SNAPSHOTS_KEY = "next-ai-draw-io-xml-snapshots"
|
||||
const STORAGE_SESSION_ID_KEY = "next-ai-draw-io-session-id"
|
||||
export const STORAGE_DIAGRAM_XML_KEY = "next-ai-draw-io-diagram-xml"
|
||||
const STORAGE_DIAGRAM_XML_KEY = "next-ai-draw-io-diagram-xml"
|
||||
|
||||
// Type for message parts (tool calls and their states)
|
||||
interface MessagePart {
|
||||
type: string
|
||||
state?: string
|
||||
toolName?: string
|
||||
[key: string]: unknown
|
||||
}
|
||||
|
||||
interface ChatMessage {
|
||||
role: string
|
||||
parts?: MessagePart[]
|
||||
[key: string]: unknown
|
||||
}
|
||||
import { useDiagram } from "@/contexts/diagram-context"
|
||||
import { findCachedResponse } from "@/lib/cached-responses"
|
||||
import { formatXML } from "@/lib/utils"
|
||||
import { ChatMessageDisplay } from "./chat-message-display"
|
||||
|
||||
interface ChatPanelProps {
|
||||
isVisible: boolean
|
||||
onToggleVisibility: () => void
|
||||
drawioUi: "min" | "sketch"
|
||||
onToggleDrawioUi: () => void
|
||||
darkMode: boolean
|
||||
onToggleDarkMode: () => void
|
||||
isMobile?: boolean
|
||||
onCloseProtectionChange?: (enabled: boolean) => void
|
||||
}
|
||||
|
||||
// Constants for tool states
const TOOL_ERROR_STATE = "output-error" as const
const DEBUG = process.env.NODE_ENV === "development"
const MAX_AUTO_RETRY_COUNT = 1

/**
 * Check if auto-resubmit should happen based on tool errors.
 * Does NOT handle retry count or quota - those are handled by the caller.
 */
function hasToolErrors(messages: ChatMessage[]): boolean {
const lastMessage = messages[messages.length - 1]
if (!lastMessage || lastMessage.role !== "assistant") {
return false
}

const toolParts =
(lastMessage.parts as MessagePart[] | undefined)?.filter((part) =>
part.type?.startsWith("tool-"),
) || []

if (toolParts.length === 0) {
return false
}

return toolParts.some((part) => part.state === TOOL_ERROR_STATE)
}

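For illustration, this is the kind of message list that makes the check return `true`: the last message is from the assistant and its tool part carries the `output-error` state. The objects below are a minimal invented example, not a real transcript.

```ts
const messages: ChatMessage[] = [
    { role: "user", parts: [{ type: "text" }] },
    {
        role: "assistant",
        parts: [
            { type: "text" },
            { type: "tool-display_diagram", state: "output-error" },
        ],
    },
]

hasToolErrors(messages)             // → true, so sendAutomaticallyWhen may trigger a retry
hasToolErrors(messages.slice(0, 1)) // → false, last message is not from the assistant
```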
export default function ChatPanel({
|
||||
isVisible,
|
||||
onToggleVisibility,
|
||||
drawioUi,
|
||||
onToggleDrawioUi,
|
||||
darkMode,
|
||||
onToggleDarkMode,
|
||||
isMobile = false,
|
||||
onCloseProtectionChange,
|
||||
}: ChatPanelProps) {
|
||||
@@ -133,38 +89,20 @@ export default function ChatPanel({
|
||||
])
|
||||
}
|
||||
|
||||
// File processing using extracted hook
|
||||
const { files, pdfData, handleFileChange, setFiles } = useFileProcessor()
|
||||
|
||||
const [files, setFiles] = useState<File[]>([])
|
||||
const [showHistory, setShowHistory] = useState(false)
|
||||
const [showSettingsDialog, setShowSettingsDialog] = useState(false)
|
||||
const [, setAccessCodeRequired] = useState(false)
|
||||
const [input, setInput] = useState("")
|
||||
const [dailyRequestLimit, setDailyRequestLimit] = useState(0)
|
||||
const [dailyTokenLimit, setDailyTokenLimit] = useState(0)
|
||||
const [tpmLimit, setTpmLimit] = useState(0)
|
||||
const [showNewChatDialog, setShowNewChatDialog] = useState(false)
|
||||
|
||||
// Check config on mount
|
||||
// Check if access code is required on mount
|
||||
useEffect(() => {
|
||||
fetch("/api/config")
|
||||
.then((res) => res.json())
|
||||
.then((data) => {
|
||||
setAccessCodeRequired(data.accessCodeRequired)
|
||||
setDailyRequestLimit(data.dailyRequestLimit || 0)
|
||||
setDailyTokenLimit(data.dailyTokenLimit || 0)
|
||||
setTpmLimit(data.tpmLimit || 0)
|
||||
})
|
||||
.then((data) => setAccessCodeRequired(data.accessCodeRequired))
|
||||
.catch(() => setAccessCodeRequired(false))
|
||||
}, [])
|
||||
|
||||
// Quota management using extracted hook
|
||||
const quotaManager = useQuotaManager({
|
||||
dailyRequestLimit,
|
||||
dailyTokenLimit,
|
||||
tpmLimit,
|
||||
})
|
||||
|
||||
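The hook's full surface isn't shown in this diff; based only on the methods called later in this file (`checkTokenLimit`, `checkTPMLimit`, and the matching toast helpers), a pre-send guard built on it would look roughly like the sketch below. This is an assumption about usage, not the hook's documented API.

```ts
// Sketch: gate a manual send on the same quota checks used for auto-retries below.
function canSendNow(): boolean {
    const tokenCheck = quotaManager.checkTokenLimit()
    if (!tokenCheck.allowed) {
        quotaManager.showTokenLimitToast(tokenCheck.used)
        return false
    }
    const tpmCheck = quotaManager.checkTPMLimit()
    if (!tpmCheck.allowed) {
        quotaManager.showTPMLimitToast()
        return false
    }
    return true
}
```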
// Generate a unique session ID for Langfuse tracing (restore from localStorage if available)
|
||||
const [sessionId, setSessionId] = useState(() => {
|
||||
if (typeof window !== "undefined") {
|
||||
@@ -189,12 +127,6 @@ export default function ChatPanel({
|
||||
// Ref to hold stop function for use in onToolCall (avoids stale closure)
|
||||
const stopRef = useRef<(() => void) | null>(null)
|
||||
|
||||
// Ref to track consecutive auto-retry count (reset on user action)
|
||||
const autoRetryCountRef = useRef(0)
|
||||
|
||||
// Persist processed tool call IDs so collapsing the chat doesn't replay old tool outputs
|
||||
const processedToolCallsRef = useRef<Set<string>>(new Set())
|
||||
|
||||
const {
|
||||
messages,
|
||||
sendMessage,
|
||||
@@ -208,25 +140,11 @@ export default function ChatPanel({
|
||||
api: "/api/chat",
|
||||
}),
|
||||
async onToolCall({ toolCall }) {
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
`[onToolCall] Tool: ${toolCall.toolName}, CallId: ${toolCall.toolCallId}`,
|
||||
)
|
||||
}
|
||||
|
||||
if (toolCall.toolName === "display_diagram") {
|
||||
const { xml } = toolCall.input as { xml: string }
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
`[display_diagram] Received XML length: ${xml.length}`,
|
||||
)
|
||||
}
|
||||
|
||||
// Wrap raw XML with full mxfile structure for draw.io
|
||||
const fullXml = wrapWithMxFile(xml)
|
||||
|
||||
// loadDiagram validates and returns error if invalid
|
||||
const validationError = onDisplayChart(fullXml)
|
||||
const validationError = onDisplayChart(xml)
|
||||
|
||||
if (validationError) {
|
||||
console.warn(
|
||||
@@ -234,41 +152,27 @@ export default function ChatPanel({
|
||||
validationError,
|
||||
)
|
||||
// Return error to model - sendAutomaticallyWhen will trigger retry
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
"[display_diagram] Adding tool output with state: output-error",
|
||||
)
|
||||
}
|
||||
addToolOutput({
|
||||
tool: "display_diagram",
|
||||
toolCallId: toolCall.toolCallId,
|
||||
state: "output-error",
|
||||
errorText: `${validationError}
|
||||
const errorMessage = `${validationError}
|
||||
|
||||
Please fix the XML issues and call display_diagram again with corrected XML.
|
||||
|
||||
Your failed XML:
|
||||
\`\`\`xml
|
||||
${xml}
|
||||
\`\`\``,
|
||||
\`\`\``
|
||||
addToolOutput({
|
||||
tool: "display_diagram",
|
||||
toolCallId: toolCall.toolCallId,
|
||||
state: "output-error",
|
||||
errorText: errorMessage,
|
||||
})
|
||||
} else {
|
||||
// Success - diagram will be rendered by chat-message-display
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
"[display_diagram] Success! Adding tool output with state: output-available",
|
||||
)
|
||||
}
|
||||
addToolOutput({
|
||||
tool: "display_diagram",
|
||||
toolCallId: toolCall.toolCallId,
|
||||
output: "Successfully displayed the diagram.",
|
||||
})
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
"[display_diagram] Tool output added. Diagram should be visible now.",
|
||||
)
|
||||
}
|
||||
}
|
||||
} else if (toolCall.toolName === "edit_diagram") {
|
||||
const { edits } = toolCall.input as {
|
||||
@@ -325,7 +229,7 @@ Please fix the edit to avoid structural issues (e.g., duplicate IDs, invalid ref
|
||||
})
|
||||
return
|
||||
}
|
||||
onExport()
|
||||
|
||||
addToolOutput({
|
||||
tool: "edit_diagram",
|
||||
toolCallId: toolCall.toolCallId,
|
||||
@@ -361,28 +265,13 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
console.error("Chat error:", error)
|
||||
}
|
||||
|
||||
// Translate technical errors into user-friendly messages
|
||||
// The server now handles detailed error messages, so we can display them directly.
|
||||
// But we still handle connection/network errors that happen before reaching the server.
|
||||
let friendlyMessage = error.message
|
||||
|
||||
// Simple check for network errors if message is generic
|
||||
if (friendlyMessage === "Failed to fetch") {
|
||||
friendlyMessage = "Network error. Please check your connection."
|
||||
}
|
||||
|
||||
// Translate image not supported error
|
||||
if (friendlyMessage.includes("image content block")) {
|
||||
friendlyMessage = "This model doesn't support image input."
|
||||
}
|
||||
|
||||
// Add system message for error so it can be cleared
|
||||
setMessages((currentMessages) => {
|
||||
const errorMessage = {
|
||||
id: `error-${Date.now()}`,
|
||||
role: "system" as const,
|
||||
content: friendlyMessage,
|
||||
parts: [{ type: "text" as const, text: friendlyMessage }],
|
||||
content: error.message,
|
||||
parts: [{ type: "text" as const, text: error.message }],
|
||||
}
|
||||
return [...currentMessages, errorMessage]
|
||||
})
|
||||
@@ -393,88 +282,9 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
setShowSettingsDialog(true)
|
||||
}
|
||||
},
|
||||
onFinish: ({ message }) => {
|
||||
// Track actual token usage from server metadata
|
||||
const metadata = message?.metadata as
|
||||
| Record<string, unknown>
|
||||
| undefined
|
||||
if (metadata) {
|
||||
// Use Number.isFinite to guard against NaN (typeof NaN === 'number' is true)
|
||||
const inputTokens = Number.isFinite(metadata.inputTokens)
|
||||
? (metadata.inputTokens as number)
|
||||
: 0
|
||||
const outputTokens = Number.isFinite(metadata.outputTokens)
|
||||
? (metadata.outputTokens as number)
|
||||
: 0
|
||||
const actualTokens = inputTokens + outputTokens
|
||||
if (actualTokens > 0) {
|
||||
quotaManager.incrementTokenCount(actualTokens)
|
||||
quotaManager.incrementTPMCount(actualTokens)
|
||||
}
|
||||
}
|
||||
},
|
||||
sendAutomaticallyWhen: ({ messages }) => {
|
||||
const shouldRetry = hasToolErrors(
|
||||
messages as unknown as ChatMessage[],
|
||||
)
|
||||
|
||||
if (!shouldRetry) {
|
||||
// No error, reset retry count
|
||||
autoRetryCountRef.current = 0
|
||||
if (DEBUG) {
|
||||
console.log("[sendAutomaticallyWhen] No errors, stopping")
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Check retry count limit
|
||||
if (autoRetryCountRef.current >= MAX_AUTO_RETRY_COUNT) {
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
`[sendAutomaticallyWhen] Max retry count (${MAX_AUTO_RETRY_COUNT}) reached, stopping`,
|
||||
)
|
||||
}
|
||||
toast.error(
|
||||
`Auto-retry limit reached (${MAX_AUTO_RETRY_COUNT}). Please try again manually.`,
|
||||
)
|
||||
autoRetryCountRef.current = 0
|
||||
return false
|
||||
}
|
||||
|
||||
// Check quota limits before auto-retry
|
||||
const tokenLimitCheck = quotaManager.checkTokenLimit()
|
||||
if (!tokenLimitCheck.allowed) {
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
"[sendAutomaticallyWhen] Token limit exceeded, stopping",
|
||||
)
|
||||
}
|
||||
quotaManager.showTokenLimitToast(tokenLimitCheck.used)
|
||||
autoRetryCountRef.current = 0
|
||||
return false
|
||||
}
|
||||
|
||||
const tpmCheck = quotaManager.checkTPMLimit()
|
||||
if (!tpmCheck.allowed) {
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
"[sendAutomaticallyWhen] TPM limit exceeded, stopping",
|
||||
)
|
||||
}
|
||||
quotaManager.showTPMLimitToast()
|
||||
autoRetryCountRef.current = 0
|
||||
return false
|
||||
}
|
||||
|
||||
// Increment retry count and allow retry
|
||||
autoRetryCountRef.current++
|
||||
if (DEBUG) {
|
||||
console.log(
|
||||
`[sendAutomaticallyWhen] Retrying (${autoRetryCountRef.current}/${MAX_AUTO_RETRY_COUNT})`,
|
||||
)
|
||||
}
|
||||
return true
|
||||
},
|
||||
// Auto-resubmit when all tool results are available (including errors)
|
||||
// This enables the model to retry when a tool returns an error
|
||||
sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
|
||||
})
|
||||
|
||||
// Update stopRef so onToolCall can access it
|
||||
@@ -520,13 +330,13 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
const hasDiagramRestoredRef = useRef(false)
|
||||
const [canSaveDiagram, setCanSaveDiagram] = useState(false)
|
||||
useEffect(() => {
|
||||
// Reset restore flag when DrawIO is not ready (e.g., theme/UI change remounts it)
|
||||
if (!isDrawioReady) {
|
||||
hasDiagramRestoredRef.current = false
|
||||
setCanSaveDiagram(false)
|
||||
return
|
||||
}
|
||||
if (hasDiagramRestoredRef.current) return
|
||||
console.log(
|
||||
"[ChatPanel] isDrawioReady:",
|
||||
isDrawioReady,
|
||||
"hasDiagramRestored:",
|
||||
hasDiagramRestoredRef.current,
|
||||
)
|
||||
if (!isDrawioReady || hasDiagramRestoredRef.current) return
|
||||
hasDiagramRestoredRef.current = true
|
||||
|
||||
try {
|
||||
@@ -567,14 +377,6 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
}
|
||||
}, [messages])
|
||||
|
||||
// Save diagram XML to localStorage whenever it changes
|
||||
useEffect(() => {
|
||||
if (!canSaveDiagram) return
|
||||
if (chartXML && chartXML.length > 300) {
|
||||
localStorage.setItem(STORAGE_DIAGRAM_XML_KEY, chartXML)
|
||||
}
|
||||
}, [chartXML, canSaveDiagram])
|
||||
|
||||
// Save XML snapshots to localStorage whenever they change
|
||||
const saveXmlSnapshots = useCallback(() => {
|
||||
try {
|
||||
@@ -596,6 +398,20 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
localStorage.setItem(STORAGE_SESSION_ID_KEY, sessionId)
|
||||
}, [sessionId])
|
||||
|
||||
// Save current diagram XML to localStorage whenever it changes
|
||||
// Only save after initial restore is complete and if it's not an empty diagram
|
||||
useEffect(() => {
|
||||
if (!canSaveDiagram) return
|
||||
// Don't save empty diagrams (check for minimal content)
|
||||
if (chartXML && chartXML.length > 300) {
|
||||
console.log(
|
||||
"[ChatPanel] Saving diagram to localStorage, length:",
|
||||
chartXML.length,
|
||||
)
|
||||
localStorage.setItem(STORAGE_DIAGRAM_XML_KEY, chartXML)
|
||||
}
|
||||
}, [chartXML, canSaveDiagram])
|
||||
|
||||
useEffect(() => {
|
||||
if (messagesEndRef.current) {
|
||||
messagesEndRef.current.scrollIntoView({ behavior: "smooth" })
|
||||
@@ -645,19 +461,11 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
// Add user message and fake assistant response to messages
|
||||
// The chat-message-display useEffect will handle displaying the diagram
|
||||
const toolCallId = `cached-${Date.now()}`
|
||||
|
||||
// Build user message text including any file content
|
||||
const userText = await processFilesAndAppendContent(
|
||||
input,
|
||||
files,
|
||||
pdfData,
|
||||
)
|
||||
|
||||
setMessages([
|
||||
{
|
||||
id: `user-${Date.now()}`,
|
||||
role: "user" as const,
|
||||
parts: [{ type: "text" as const, text: userText }],
|
||||
parts: [{ type: "text" as const, text: input }],
|
||||
},
|
||||
{
|
||||
id: `assistant-${Date.now()}`,
|
||||
@@ -687,39 +495,45 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
// This ensures edit_diagram has the correct XML before AI responds
|
||||
chartXMLRef.current = chartXml
|
||||
|
||||
// Build user text by concatenating input with pre-extracted text
|
||||
// (Backend only reads first text part, so we must combine them)
|
||||
const parts: any[] = []
|
||||
const userText = await processFilesAndAppendContent(
|
||||
input,
|
||||
files,
|
||||
pdfData,
|
||||
parts,
|
||||
)
|
||||
const parts: any[] = [{ type: "text", text: input }]
|
||||
|
||||
// Add the combined text as the first part
|
||||
parts.unshift({ type: "text", text: userText })
|
||||
if (files.length > 0) {
|
||||
for (const file of files) {
|
||||
const reader = new FileReader()
|
||||
const dataUrl = await new Promise<string>((resolve) => {
|
||||
reader.onload = () =>
|
||||
resolve(reader.result as string)
|
||||
reader.readAsDataURL(file)
|
||||
})
|
||||
|
||||
// Get previous XML from the last snapshot (before this message)
|
||||
const snapshotKeys = Array.from(
|
||||
xmlSnapshotsRef.current.keys(),
|
||||
).sort((a, b) => b - a)
|
||||
const previousXml =
|
||||
snapshotKeys.length > 0
|
||||
? xmlSnapshotsRef.current.get(snapshotKeys[0]) || ""
|
||||
: ""
|
||||
parts.push({
|
||||
type: "file",
|
||||
url: dataUrl,
|
||||
mediaType: file.type,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Save XML snapshot for this message (will be at index = current messages.length)
|
||||
const messageIndex = messages.length
|
||||
xmlSnapshotsRef.current.set(messageIndex, chartXml)
|
||||
saveXmlSnapshots()
|
||||
|
||||
// Check all quota limits
|
||||
if (!checkAllQuotaLimits()) return
|
||||
const accessCode =
|
||||
localStorage.getItem(STORAGE_ACCESS_CODE_KEY) || ""
|
||||
sendMessage(
|
||||
{ parts },
|
||||
{
|
||||
body: {
|
||||
xml: chartXml,
|
||||
sessionId,
|
||||
},
|
||||
headers: {
|
||||
"x-access-code": accessCode,
|
||||
},
|
||||
},
|
||||
)
|
||||
|
||||
sendChatMessage(parts, chartXml, previousXml, sessionId)
|
||||
|
||||
// Token count is tracked in onFinish with actual server usage
|
||||
setInput("")
|
||||
setFiles([])
|
||||
} catch (error) {
|
||||
@@ -728,159 +542,14 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
}
|
||||
}
|
||||
|
||||
const handleNewChat = useCallback(() => {
|
||||
setMessages([])
|
||||
clearDiagram()
|
||||
handleFileChange([]) // Use handleFileChange to also clear pdfData
|
||||
const newSessionId = `session-${Date.now()}-${Math.random()
|
||||
.toString(36)
|
||||
.slice(2, 9)}`
|
||||
setSessionId(newSessionId)
|
||||
xmlSnapshotsRef.current.clear()
|
||||
// Clear localStorage with error handling
|
||||
try {
|
||||
localStorage.removeItem(STORAGE_MESSAGES_KEY)
|
||||
localStorage.removeItem(STORAGE_XML_SNAPSHOTS_KEY)
|
||||
localStorage.removeItem(STORAGE_DIAGRAM_XML_KEY)
|
||||
localStorage.setItem(STORAGE_SESSION_ID_KEY, newSessionId)
|
||||
toast.success("Started a fresh chat")
|
||||
} catch (error) {
|
||||
console.error("Failed to clear localStorage:", error)
|
||||
toast.warning(
|
||||
"Chat cleared but browser storage could not be updated",
|
||||
)
|
||||
}
|
||||
|
||||
setShowNewChatDialog(false)
|
||||
}, [clearDiagram, handleFileChange, setMessages, setSessionId])
|
||||
|
||||
const handleInputChange = (
|
||||
e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>,
|
||||
) => {
|
||||
setInput(e.target.value)
|
||||
}
|
||||
|
||||
// Helper functions for message actions (regenerate/edit)
|
||||
// Extract previous XML snapshot before a given message index
|
||||
const getPreviousXml = (beforeIndex: number): string => {
|
||||
const snapshotKeys = Array.from(xmlSnapshotsRef.current.keys())
|
||||
.filter((k) => k < beforeIndex)
|
||||
.sort((a, b) => b - a)
|
||||
return snapshotKeys.length > 0
|
||||
? xmlSnapshotsRef.current.get(snapshotKeys[0]) || ""
|
||||
: ""
|
||||
}
|
||||
|
||||
// Restore diagram from snapshot and update ref
|
||||
const restoreDiagramFromSnapshot = (savedXml: string) => {
|
||||
onDisplayChart(savedXml, true) // Skip validation for trusted snapshots
|
||||
chartXMLRef.current = savedXml
|
||||
}
|
||||
|
||||
// Clean up snapshots after a given message index
|
||||
const cleanupSnapshotsAfter = (messageIndex: number) => {
|
||||
for (const key of xmlSnapshotsRef.current.keys()) {
|
||||
if (key > messageIndex) {
|
||||
xmlSnapshotsRef.current.delete(key)
|
||||
}
|
||||
}
|
||||
saveXmlSnapshots()
|
||||
}
|
||||
|
||||
// Check all quota limits (daily requests, tokens, TPM)
|
||||
const checkAllQuotaLimits = (): boolean => {
|
||||
const limitCheck = quotaManager.checkDailyLimit()
|
||||
if (!limitCheck.allowed) {
|
||||
quotaManager.showQuotaLimitToast()
|
||||
return false
|
||||
}
|
||||
|
||||
const tokenLimitCheck = quotaManager.checkTokenLimit()
|
||||
if (!tokenLimitCheck.allowed) {
|
||||
quotaManager.showTokenLimitToast(tokenLimitCheck.used)
|
||||
return false
|
||||
}
|
||||
|
||||
const tpmCheck = quotaManager.checkTPMLimit()
|
||||
if (!tpmCheck.allowed) {
|
||||
quotaManager.showTPMLimitToast()
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Send chat message with headers and increment quota
|
||||
const sendChatMessage = (
|
||||
parts: any,
|
||||
xml: string,
|
||||
previousXml: string,
|
||||
sessionId: string,
|
||||
) => {
|
||||
// Reset auto-retry count on user-initiated message
|
||||
autoRetryCountRef.current = 0
|
||||
|
||||
const config = getAIConfig()
|
||||
|
||||
sendMessage(
|
||||
{ parts },
|
||||
{
|
||||
body: { xml, previousXml, sessionId },
|
||||
headers: {
|
||||
"x-access-code": config.accessCode,
|
||||
...(config.aiProvider && {
|
||||
"x-ai-provider": config.aiProvider,
|
||||
...(config.aiBaseUrl && {
|
||||
"x-ai-base-url": config.aiBaseUrl,
|
||||
}),
|
||||
...(config.aiApiKey && {
|
||||
"x-ai-api-key": config.aiApiKey,
|
||||
}),
|
||||
...(config.aiModel && { "x-ai-model": config.aiModel }),
|
||||
}),
|
||||
},
|
||||
},
|
||||
)
|
||||
quotaManager.incrementRequestCount()
|
||||
}
|
||||
|
||||
// Process files and append content to user text (handles PDF, text, and optionally images)
|
||||
const processFilesAndAppendContent = async (
|
||||
baseText: string,
|
||||
files: File[],
|
||||
pdfData: Map<File, FileData>,
|
||||
imageParts?: any[],
|
||||
): Promise<string> => {
|
||||
let userText = baseText
|
||||
|
||||
for (const file of files) {
|
||||
if (isPdfFile(file)) {
|
||||
const extracted = pdfData.get(file)
|
||||
if (extracted?.text) {
|
||||
userText += `\n\n[PDF: ${file.name}]\n${extracted.text}`
|
||||
}
|
||||
} else if (isTextFile(file)) {
|
||||
const extracted = pdfData.get(file)
|
||||
if (extracted?.text) {
|
||||
userText += `\n\n[File: ${file.name}]\n${extracted.text}`
|
||||
}
|
||||
} else if (imageParts) {
|
||||
// Handle as image (only if imageParts array provided)
|
||||
const reader = new FileReader()
|
||||
const dataUrl = await new Promise<string>((resolve) => {
|
||||
reader.onload = () => resolve(reader.result as string)
|
||||
reader.readAsDataURL(file)
|
||||
})
|
||||
|
||||
imageParts.push({
|
||||
type: "file",
|
||||
url: dataUrl,
|
||||
mediaType: file.type,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return userText
|
||||
const handleFileChange = (newFiles: File[]) => {
|
||||
setFiles(newFiles)
|
||||
}
|
||||
|
||||
const handleRegenerate = async (messageIndex: number) => {
|
||||
@@ -915,12 +584,19 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
return
|
||||
}
|
||||
|
||||
// Get previous XML and restore diagram state
|
||||
const previousXml = getPreviousXml(userMessageIndex)
|
||||
restoreDiagramFromSnapshot(savedXml)
|
||||
// Restore the diagram to the saved state (skip validation for trusted snapshots)
|
||||
onDisplayChart(savedXml, true)
|
||||
|
||||
// Update ref directly to ensure edit_diagram has the correct XML
|
||||
chartXMLRef.current = savedXml
|
||||
|
||||
// Clean up snapshots for messages after the user message (they will be removed)
|
||||
cleanupSnapshotsAfter(userMessageIndex)
|
||||
for (const key of xmlSnapshotsRef.current.keys()) {
|
||||
if (key > userMessageIndex) {
|
||||
xmlSnapshotsRef.current.delete(key)
|
||||
}
|
||||
}
|
||||
saveXmlSnapshots()
|
||||
|
||||
// Remove the user message AND assistant message onwards (sendMessage will re-add the user message)
|
||||
// Use flushSync to ensure state update is processed synchronously before sending
|
||||
@@ -929,13 +605,20 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
setMessages(newMessages)
|
||||
})
|
||||
|
||||
// Check all quota limits
|
||||
if (!checkAllQuotaLimits()) return
|
||||
|
||||
// Now send the message after state is guaranteed to be updated
|
||||
sendChatMessage(userParts, savedXml, previousXml, sessionId)
|
||||
|
||||
// Token count is tracked in onFinish with actual server usage
|
||||
const accessCode = localStorage.getItem(STORAGE_ACCESS_CODE_KEY) || ""
|
||||
sendMessage(
|
||||
{ parts: userParts },
|
||||
{
|
||||
body: {
|
||||
xml: savedXml,
|
||||
sessionId,
|
||||
},
|
||||
headers: {
|
||||
"x-access-code": accessCode,
|
||||
},
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
const handleEditMessage = async (messageIndex: number, newText: string) => {
|
||||
@@ -955,12 +638,19 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
return
|
||||
}
|
||||
|
||||
// Get previous XML and restore diagram state
|
||||
const previousXml = getPreviousXml(messageIndex)
|
||||
restoreDiagramFromSnapshot(savedXml)
|
||||
// Restore the diagram to the saved state (skip validation for trusted snapshots)
|
||||
onDisplayChart(savedXml, true)
|
||||
|
||||
// Update ref directly to ensure edit_diagram has the correct XML
|
||||
chartXMLRef.current = savedXml
|
||||
|
||||
// Clean up snapshots for messages after the user message (they will be removed)
|
||||
cleanupSnapshotsAfter(messageIndex)
|
||||
for (const key of xmlSnapshotsRef.current.keys()) {
|
||||
if (key > messageIndex) {
|
||||
xmlSnapshotsRef.current.delete(key)
|
||||
}
|
||||
}
|
||||
saveXmlSnapshots()
|
||||
|
||||
// Create new parts with updated text
|
||||
const newParts = message.parts?.map((part: any) => {
|
||||
@@ -977,12 +667,20 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
setMessages(newMessages)
|
||||
})
|
||||
|
||||
// Check all quota limits
|
||||
if (!checkAllQuotaLimits()) return
|
||||
|
||||
// Now send the edited message after state is guaranteed to be updated
|
||||
sendChatMessage(newParts, savedXml, previousXml, sessionId)
|
||||
// Token count is tracked in onFinish with actual server usage
|
||||
const accessCode = localStorage.getItem(STORAGE_ACCESS_CODE_KEY) || ""
|
||||
sendMessage(
|
||||
{ parts: newParts },
|
||||
{
|
||||
body: {
|
||||
xml: savedXml,
|
||||
sessionId,
|
||||
},
|
||||
headers: {
|
||||
"x-access-code": accessCode,
|
||||
},
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
// Collapsed view (desktop only)
|
||||
@@ -1017,14 +715,7 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
<Toaster
|
||||
position="bottom-center"
|
||||
richColors
|
||||
expand
|
||||
style={{ position: "absolute" }}
|
||||
toastOptions={{
|
||||
style: {
|
||||
maxWidth: "480px",
|
||||
},
|
||||
duration: 2000,
|
||||
}}
|
||||
/>
|
||||
{/* Header */}
|
||||
<header
|
||||
@@ -1049,43 +740,23 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
{!isMobile && (
|
||||
<Link
|
||||
href="/about"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
className="text-sm text-muted-foreground hover:text-foreground transition-colors ml-2"
|
||||
>
|
||||
About
|
||||
</Link>
|
||||
)}
|
||||
{!isMobile && (
|
||||
<Link
|
||||
href="/about"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
<ButtonWithTooltip
|
||||
tooltipContent="Recent generation failures were caused by our AI provider's infrastructure issue, not the app code. After extensive debugging, I've switched providers and observed 6 hours of stability. If issues persist, please report on GitHub."
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
className="h-6 w-6 text-green-500 hover:text-green-600"
|
||||
>
|
||||
<ButtonWithTooltip
|
||||
tooltipContent="Due to high usage, I have changed the model to minimax-m2 and added some usage limits. See About page for details."
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
className="h-6 w-6 text-amber-500 hover:text-amber-600"
|
||||
>
|
||||
<AlertTriangle className="h-4 w-4" />
|
||||
</ButtonWithTooltip>
|
||||
</Link>
|
||||
<CheckCircle className="h-4 w-4" />
|
||||
</ButtonWithTooltip>
|
||||
)}
|
||||
</div>
|
||||
<div className="flex items-center gap-1">
|
||||
<ButtonWithTooltip
|
||||
tooltipContent="Start fresh chat"
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
onClick={() => setShowNewChatDialog(true)}
|
||||
className="hover:bg-accent"
|
||||
>
|
||||
<MessageSquarePlus
|
||||
className={`${isMobile ? "h-4 w-4" : "h-5 w-5"} text-muted-foreground`}
|
||||
/>
|
||||
</ButtonWithTooltip>
|
||||
<div className="w-px h-5 bg-border mx-1" />
|
||||
<a
|
||||
href="https://github.com/DayuanJiang/next-ai-draw-io"
|
||||
target="_blank"
|
||||
@@ -1128,10 +799,8 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
messages={messages}
|
||||
setInput={setInput}
|
||||
setFiles={handleFileChange}
|
||||
processedToolCallsRef={processedToolCallsRef}
|
||||
sessionId={sessionId}
|
||||
onRegenerate={handleRegenerate}
|
||||
status={status}
|
||||
onEditMessage={handleEditMessage}
|
||||
/>
|
||||
</main>
|
||||
@@ -1145,14 +814,31 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
status={status}
|
||||
onSubmit={onFormSubmit}
|
||||
onChange={handleInputChange}
|
||||
onClearChat={handleNewChat}
|
||||
onClearChat={() => {
|
||||
setMessages([])
|
||||
clearDiagram()
|
||||
const newSessionId = `session-${Date.now()}-${Math.random()
|
||||
.toString(36)
|
||||
.slice(2, 9)}`
|
||||
setSessionId(newSessionId)
|
||||
xmlSnapshotsRef.current.clear()
|
||||
// Clear localStorage
|
||||
localStorage.removeItem(STORAGE_MESSAGES_KEY)
|
||||
localStorage.removeItem(STORAGE_XML_SNAPSHOTS_KEY)
|
||||
localStorage.removeItem(STORAGE_DIAGRAM_XML_KEY)
|
||||
localStorage.setItem(
|
||||
STORAGE_SESSION_ID_KEY,
|
||||
newSessionId,
|
||||
)
|
||||
}}
|
||||
files={files}
|
||||
onFileChange={handleFileChange}
|
||||
pdfData={pdfData}
|
||||
showHistory={showHistory}
|
||||
onToggleHistory={setShowHistory}
|
||||
sessionId={sessionId}
|
||||
error={error}
|
||||
drawioUi={drawioUi}
|
||||
onToggleDrawioUi={onToggleDrawioUi}
|
||||
/>
|
||||
</footer>
|
||||
|
||||
@@ -1160,16 +846,6 @@ Please retry with an adjusted search pattern or use display_diagram if retries a
|
||||
open={showSettingsDialog}
|
||||
onOpenChange={setShowSettingsDialog}
|
||||
onCloseProtectionChange={onCloseProtectionChange}
|
||||
drawioUi={drawioUi}
|
||||
onToggleDrawioUi={onToggleDrawioUi}
|
||||
darkMode={darkMode}
|
||||
onToggleDarkMode={onToggleDarkMode}
|
||||
/>
|
||||
|
||||
<ResetWarningModal
|
||||
open={showNewChatDialog}
|
||||
onOpenChange={setShowNewChatDialog}
|
||||
onClear={handleNewChat}
|
||||
/>
|
||||
</div>
|
||||
)
|
||||
|
||||
@@ -1,31 +1,15 @@
|
||||
"use client"
|
||||
|
||||
import { FileCode, FileText, Loader2, X } from "lucide-react"
|
||||
import { X } from "lucide-react"
|
||||
import Image from "next/image"
|
||||
import { useEffect, useRef, useState } from "react"
|
||||
import { isPdfFile, isTextFile } from "@/lib/pdf-utils"
|
||||
|
||||
function formatCharCount(count: number): string {
|
||||
if (count >= 1000) {
|
||||
return `${(count / 1000).toFixed(1)}k`
|
||||
}
|
||||
return String(count)
|
||||
}
|
||||
|
||||
interface FilePreviewListProps {
|
||||
files: File[]
|
||||
onRemoveFile: (fileToRemove: File) => void
|
||||
pdfData?: Map<
|
||||
File,
|
||||
{ text: string; charCount: number; isExtracting: boolean }
|
||||
>
|
||||
}
|
||||
|
||||
export function FilePreviewList({
|
||||
files,
|
||||
onRemoveFile,
|
||||
pdfData = new Map(),
|
||||
}: FilePreviewListProps) {
|
||||
export function FilePreviewList({ files, onRemoveFile }: FilePreviewListProps) {
|
||||
const [selectedImage, setSelectedImage] = useState<string | null>(null)
|
||||
const [imageUrls, setImageUrls] = useState<Map<File, string>>(new Map())
|
||||
const imageUrlsRef = useRef<Map<File, string>>(new Map())
|
||||
@@ -64,8 +48,6 @@ export function FilePreviewList({
|
||||
imageUrlsRef.current.forEach((url) => {
|
||||
URL.revokeObjectURL(url)
|
||||
})
|
||||
// Clear the ref so StrictMode remount creates fresh URLs
|
||||
imageUrlsRef.current = new Map()
|
||||
}
|
||||
}, [])
|
||||
|
||||
@@ -86,19 +68,12 @@ export function FilePreviewList({
|
||||
<div className="flex flex-wrap gap-2 mt-2 p-2 bg-muted/50 rounded-md">
|
||||
{files.map((file, index) => {
|
||||
const imageUrl = imageUrls.get(file) || null
|
||||
const pdfInfo = pdfData.get(file)
|
||||
return (
|
||||
<div key={file.name + index} className="relative group">
|
||||
<div
|
||||
className={`w-20 h-20 border rounded-md overflow-hidden bg-muted ${
|
||||
file.type.startsWith("image/") && imageUrl
|
||||
? "cursor-pointer"
|
||||
: ""
|
||||
}`}
|
||||
className="w-20 h-20 border rounded-md overflow-hidden bg-muted cursor-pointer"
|
||||
onClick={() =>
|
||||
file.type.startsWith("image/") &&
|
||||
imageUrl &&
|
||||
setSelectedImage(imageUrl)
|
||||
imageUrl && setSelectedImage(imageUrl)
|
||||
}
|
||||
>
|
||||
{file.type.startsWith("image/") && imageUrl ? (
|
||||
@@ -108,35 +83,7 @@ export function FilePreviewList({
|
||||
width={80}
|
||||
height={80}
|
||||
className="object-cover w-full h-full"
|
||||
unoptimized
|
||||
/>
|
||||
) : isPdfFile(file) || isTextFile(file) ? (
|
||||
<div className="flex flex-col items-center justify-center h-full p-1">
|
||||
{pdfInfo?.isExtracting ? (
|
||||
<Loader2 className="h-6 w-6 text-blue-500 mb-1 animate-spin" />
|
||||
) : isPdfFile(file) ? (
|
||||
<FileText className="h-6 w-6 text-red-500 mb-1" />
|
||||
) : (
|
||||
<FileCode className="h-6 w-6 text-blue-500 mb-1" />
|
||||
)}
|
||||
<span className="text-xs text-center truncate w-full px-1">
|
||||
{file.name.length > 10
|
||||
? `${file.name.slice(0, 7)}...`
|
||||
: file.name}
|
||||
</span>
|
||||
{pdfInfo?.isExtracting ? (
|
||||
<span className="text-[10px] text-muted-foreground">
|
||||
Reading...
|
||||
</span>
|
||||
) : pdfInfo?.charCount ? (
|
||||
<span className="text-[10px] text-green-600 font-medium">
|
||||
{formatCharCount(
|
||||
pdfInfo.charCount,
|
||||
)}{" "}
|
||||
chars
|
||||
</span>
|
||||
) : null}
|
||||
</div>
|
||||
) : (
|
||||
<div className="flex items-center justify-center h-full text-xs text-center p-1">
|
||||
{file.name}
|
||||
@@ -177,7 +124,6 @@ export function FilePreviewList({
|
||||
height={900}
|
||||
className="object-contain max-w-full max-h-[90vh] w-auto h-auto"
|
||||
onClick={(e) => e.stopPropagation()}
|
||||
unoptimized
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -1,115 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import { Coffee, X } from "lucide-react"
|
||||
import Link from "next/link"
|
||||
import type React from "react"
|
||||
import { FaGithub } from "react-icons/fa"
|
||||
|
||||
interface QuotaLimitToastProps {
|
||||
type?: "request" | "token"
|
||||
used: number
|
||||
limit: number
|
||||
onDismiss: () => void
|
||||
}
|
||||
|
||||
export function QuotaLimitToast({
|
||||
type = "request",
|
||||
used,
|
||||
limit,
|
||||
onDismiss,
|
||||
}: QuotaLimitToastProps) {
|
||||
const isTokenLimit = type === "token"
|
||||
const formatNumber = (n: number) =>
|
||||
n >= 1000 ? `${(n / 1000).toFixed(1)}k` : n.toString()
|
||||
const handleKeyDown = (e: React.KeyboardEvent) => {
|
||||
if (e.key === "Escape") {
|
||||
e.preventDefault()
|
||||
onDismiss()
|
||||
}
|
||||
}
|
||||
|
||||
return (
|
||||
<div
|
||||
role="alert"
|
||||
aria-live="polite"
|
||||
tabIndex={0}
|
||||
onKeyDown={handleKeyDown}
|
||||
className="relative w-[400px] overflow-hidden rounded-xl border border-border/50 bg-card p-5 shadow-soft animate-message-in"
|
||||
>
|
||||
{/* Close button */}
|
||||
<button
|
||||
onClick={onDismiss}
|
||||
className="absolute right-3 top-3 p-1.5 rounded-full text-muted-foreground/60 hover:text-foreground hover:bg-muted transition-colors"
|
||||
aria-label="Dismiss"
|
||||
>
|
||||
<X className="w-4 h-4" />
|
||||
</button>
|
||||
|
||||
{/* Title row with icon */}
|
||||
<div className="flex items-center gap-2.5 mb-3 pr-6">
|
||||
<div className="flex-shrink-0 w-8 h-8 rounded-lg bg-accent flex items-center justify-center">
|
||||
<Coffee
|
||||
className="w-4 h-4 text-accent-foreground"
|
||||
strokeWidth={2}
|
||||
/>
|
||||
</div>
|
||||
<h3 className="font-semibold text-foreground text-sm">
|
||||
{isTokenLimit
|
||||
? "Daily Token Limit Reached"
|
||||
: "Daily Quota Reached"}
|
||||
</h3>
|
||||
<span className="px-2 py-0.5 text-xs font-medium rounded-md bg-muted text-muted-foreground">
|
||||
{isTokenLimit
|
||||
? `${formatNumber(used)}/${formatNumber(limit)} tokens`
|
||||
: `${used}/${limit}`}
|
||||
</span>
|
||||
</div>
|
||||
|
||||
{/* Message */}
|
||||
<div className="text-sm text-muted-foreground leading-relaxed mb-4 space-y-2">
|
||||
<p>
|
||||
Oops — you've reached the daily{" "}
|
||||
{isTokenLimit ? "token" : "API"} limit for this demo! As an
|
||||
indie developer covering all the API costs myself, I have to
|
||||
set these limits to keep things sustainable.{" "}
|
||||
<Link
|
||||
href="/about"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
className="inline-flex items-center gap-1 text-amber-600 font-medium hover:text-amber-700 hover:underline"
|
||||
>
|
||||
Learn more →
|
||||
</Link>
|
||||
</p>
|
||||
<p>
|
||||
<strong>Tip:</strong> You can use your own API key (click
|
||||
the Settings icon) or self-host the project to bypass these
|
||||
limits.
|
||||
</p>
|
||||
<p>Your limit resets tomorrow. Thanks for understanding!</p>
|
||||
</div>
|
||||
|
||||
{/* Action buttons */}
|
||||
<div className="flex items-center gap-2">
|
||||
<a
|
||||
href="https://github.com/DayuanJiang/next-ai-draw-io"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
className="inline-flex items-center gap-1.5 px-3 py-1.5 text-xs font-medium rounded-lg bg-primary text-primary-foreground hover:bg-primary/90 transition-colors"
|
||||
>
|
||||
<FaGithub className="w-3.5 h-3.5" />
|
||||
Self-host
|
||||
</a>
|
||||
<a
|
||||
href="https://github.com/sponsors/DayuanJiang"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
className="inline-flex items-center gap-1.5 px-3 py-1.5 text-xs font-medium rounded-lg border border-border text-foreground hover:bg-muted transition-colors"
|
||||
>
|
||||
<Coffee className="w-3.5 h-3.5" />
|
||||
Sponsor
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
@@ -1,94 +1,37 @@
|
||||
"use client"
|
||||
|
||||
import { Moon, Sun } from "lucide-react"
|
||||
import { useEffect, useState } from "react"
|
||||
import { Button } from "@/components/ui/button"
|
||||
import {
|
||||
Dialog,
|
||||
DialogContent,
|
||||
DialogDescription,
|
||||
DialogFooter,
|
||||
DialogHeader,
|
||||
DialogTitle,
|
||||
} from "@/components/ui/dialog"
|
||||
import { Input } from "@/components/ui/input"
|
||||
import { Label } from "@/components/ui/label"
|
||||
import {
|
||||
Select,
|
||||
SelectContent,
|
||||
SelectItem,
|
||||
SelectTrigger,
|
||||
SelectValue,
|
||||
} from "@/components/ui/select"
|
||||
import { Switch } from "@/components/ui/switch"
|
||||
|
||||
interface SettingsDialogProps {
|
||||
open: boolean
|
||||
onOpenChange: (open: boolean) => void
|
||||
onCloseProtectionChange?: (enabled: boolean) => void
|
||||
drawioUi: "min" | "sketch"
|
||||
onToggleDrawioUi: () => void
|
||||
darkMode: boolean
|
||||
onToggleDarkMode: () => void
|
||||
}
|
||||
|
||||
export const STORAGE_ACCESS_CODE_KEY = "next-ai-draw-io-access-code"
|
||||
export const STORAGE_CLOSE_PROTECTION_KEY = "next-ai-draw-io-close-protection"
|
||||
const STORAGE_ACCESS_CODE_REQUIRED_KEY = "next-ai-draw-io-access-code-required"
|
||||
export const STORAGE_AI_PROVIDER_KEY = "next-ai-draw-io-ai-provider"
|
||||
export const STORAGE_AI_BASE_URL_KEY = "next-ai-draw-io-ai-base-url"
|
||||
export const STORAGE_AI_API_KEY_KEY = "next-ai-draw-io-ai-api-key"
|
||||
export const STORAGE_AI_MODEL_KEY = "next-ai-draw-io-ai-model"
|
||||
|
||||
function getStoredAccessCodeRequired(): boolean | null {
|
||||
if (typeof window === "undefined") return null
|
||||
const stored = localStorage.getItem(STORAGE_ACCESS_CODE_REQUIRED_KEY)
|
||||
if (stored === null) return null
|
||||
return stored === "true"
|
||||
}
|
||||
|
||||
export function SettingsDialog({
|
||||
open,
|
||||
onOpenChange,
|
||||
onCloseProtectionChange,
|
||||
drawioUi,
|
||||
onToggleDrawioUi,
|
||||
darkMode,
|
||||
onToggleDarkMode,
|
||||
}: SettingsDialogProps) {
|
||||
const [accessCode, setAccessCode] = useState("")
|
||||
const [closeProtection, setCloseProtection] = useState(true)
|
||||
const [isVerifying, setIsVerifying] = useState(false)
|
||||
const [error, setError] = useState("")
|
||||
const [accessCodeRequired, setAccessCodeRequired] = useState(
|
||||
() => getStoredAccessCodeRequired() ?? false,
|
||||
)
|
||||
const [provider, setProvider] = useState("")
|
||||
const [baseUrl, setBaseUrl] = useState("")
|
||||
const [apiKey, setApiKey] = useState("")
|
||||
const [modelId, setModelId] = useState("")
|
||||
|
||||
useEffect(() => {
|
||||
// Only fetch if not cached in localStorage
|
||||
if (getStoredAccessCodeRequired() !== null) return
|
||||
|
||||
fetch("/api/config")
|
||||
.then((res) => {
|
||||
if (!res.ok) throw new Error(`HTTP ${res.status}`)
|
||||
return res.json()
|
||||
})
|
||||
.then((data) => {
|
||||
const required = data?.accessCodeRequired === true
|
||||
localStorage.setItem(
|
||||
STORAGE_ACCESS_CODE_REQUIRED_KEY,
|
||||
String(required),
|
||||
)
|
||||
setAccessCodeRequired(required)
|
||||
})
|
||||
.catch(() => {
|
||||
// Don't cache on error - allow retry on next mount
|
||||
setAccessCodeRequired(false)
|
||||
})
|
||||
}, [])
|
||||
|
||||
useEffect(() => {
|
||||
if (open) {
|
||||
@@ -101,24 +44,16 @@ export function SettingsDialog({
|
||||
)
|
||||
// Default to true if not set
|
||||
setCloseProtection(storedCloseProtection !== "false")
|
||||
|
||||
// Load AI provider settings
|
||||
setProvider(localStorage.getItem(STORAGE_AI_PROVIDER_KEY) || "")
|
||||
setBaseUrl(localStorage.getItem(STORAGE_AI_BASE_URL_KEY) || "")
|
||||
setApiKey(localStorage.getItem(STORAGE_AI_API_KEY_KEY) || "")
|
||||
setModelId(localStorage.getItem(STORAGE_AI_MODEL_KEY) || "")
|
||||
|
||||
setError("")
|
||||
}
|
||||
}, [open])
|
||||
|
||||
const handleSave = async () => {
|
||||
if (!accessCodeRequired) return
|
||||
|
||||
setError("")
|
||||
setIsVerifying(true)
|
||||
|
||||
try {
|
||||
// Verify access code with server
|
||||
const response = await fetch("/api/verify-access-code", {
|
||||
method: "POST",
|
||||
headers: {
|
||||
@@ -130,10 +65,17 @@ export function SettingsDialog({
|
||||
|
||||
if (!data.valid) {
|
||||
setError(data.message || "Invalid access code")
|
||||
setIsVerifying(false)
|
||||
return
|
||||
}
|
||||
|
||||
// Save settings only if verification passes
|
||||
localStorage.setItem(STORAGE_ACCESS_CODE_KEY, accessCode.trim())
|
||||
localStorage.setItem(
|
||||
STORAGE_CLOSE_PROTECTION_KEY,
|
||||
closeProtection.toString(),
|
||||
)
|
||||
onCloseProtectionChange?.(closeProtection)
|
||||
onOpenChange(false)
|
||||
} catch {
|
||||
setError("Failed to verify access code")
|
||||
@@ -155,258 +97,31 @@ export function SettingsDialog({
|
||||
<DialogHeader>
|
||||
<DialogTitle>Settings</DialogTitle>
|
||||
<DialogDescription>
|
||||
Configure your application settings.
|
||||
Configure your access settings.
|
||||
</DialogDescription>
|
||||
</DialogHeader>
|
||||
<div className="space-y-4 py-2">
|
||||
{accessCodeRequired && (
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="access-code">Access Code</Label>
|
||||
<div className="flex gap-2">
|
||||
<Input
|
||||
id="access-code"
|
||||
type="password"
|
||||
value={accessCode}
|
||||
onChange={(e) =>
|
||||
setAccessCode(e.target.value)
|
||||
}
|
||||
onKeyDown={handleKeyDown}
|
||||
placeholder="Enter access code"
|
||||
autoComplete="off"
|
||||
/>
|
||||
<Button
|
||||
onClick={handleSave}
|
||||
disabled={isVerifying || !accessCode.trim()}
|
||||
>
|
||||
{isVerifying ? "..." : "Save"}
|
||||
</Button>
|
||||
</div>
|
||||
<p className="text-[0.8rem] text-muted-foreground">
|
||||
Required to use this application.
|
||||
</p>
|
||||
{error && (
|
||||
<p className="text-[0.8rem] text-destructive">
|
||||
{error}
|
||||
</p>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
<div className="space-y-2">
|
||||
<Label>AI Provider Settings</Label>
|
||||
<label className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70">
|
||||
Access Code
|
||||
</label>
|
||||
<Input
|
||||
type="password"
|
||||
value={accessCode}
|
||||
onChange={(e) => setAccessCode(e.target.value)}
|
||||
onKeyDown={handleKeyDown}
|
||||
placeholder="Enter access code"
|
||||
autoComplete="off"
|
||||
/>
|
||||
<p className="text-[0.8rem] text-muted-foreground">
|
||||
Use your own API key to bypass usage limits. Your
|
||||
key is stored locally in your browser and is never
|
||||
stored on the server.
|
||||
Required if the server has enabled access control.
|
||||
</p>
|
||||
<div className="space-y-3 pt-2">
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="ai-provider">Provider</Label>
|
||||
<Select
|
||||
value={provider || "default"}
|
||||
onValueChange={(value) => {
|
||||
const actualValue =
|
||||
value === "default" ? "" : value
|
||||
setProvider(actualValue)
|
||||
localStorage.setItem(
|
||||
STORAGE_AI_PROVIDER_KEY,
|
||||
actualValue,
|
||||
)
|
||||
}}
|
||||
>
|
||||
<SelectTrigger id="ai-provider">
|
||||
<SelectValue placeholder="Use Server Default" />
|
||||
</SelectTrigger>
|
||||
<SelectContent>
|
||||
<SelectItem value="default">
|
||||
Use Server Default
|
||||
</SelectItem>
|
||||
<SelectItem value="openai">
|
||||
OpenAI
|
||||
</SelectItem>
|
||||
<SelectItem value="anthropic">
|
||||
Anthropic
|
||||
</SelectItem>
|
||||
<SelectItem value="google">
|
||||
Google
|
||||
</SelectItem>
|
||||
<SelectItem value="azure">
|
||||
Azure OpenAI
|
||||
</SelectItem>
|
||||
<SelectItem value="openrouter">
|
||||
OpenRouter
|
||||
</SelectItem>
|
||||
<SelectItem value="deepseek">
|
||||
DeepSeek
|
||||
</SelectItem>
|
||||
<SelectItem value="siliconflow">
|
||||
SiliconFlow
|
||||
</SelectItem>
|
||||
</SelectContent>
|
||||
</Select>
|
||||
</div>
|
||||
{provider && provider !== "default" && (
|
||||
<>
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="ai-model">
|
||||
Model ID
|
||||
</Label>
|
||||
<Input
|
||||
id="ai-model"
|
||||
value={modelId}
|
||||
onChange={(e) => {
|
||||
setModelId(e.target.value)
|
||||
localStorage.setItem(
|
||||
STORAGE_AI_MODEL_KEY,
|
||||
e.target.value,
|
||||
)
|
||||
}}
|
||||
placeholder={
|
||||
provider === "openai"
|
||||
? "e.g., gpt-4o"
|
||||
: provider === "anthropic"
|
||||
? "e.g., claude-sonnet-4-5"
|
||||
: provider === "google"
|
||||
? "e.g., gemini-2.0-flash-exp"
|
||||
: provider ===
|
||||
"deepseek"
|
||||
? "e.g., deepseek-chat"
|
||||
: "Model ID"
|
||||
}
|
||||
/>
|
||||
</div>
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="ai-api-key">
|
||||
API Key
|
||||
</Label>
|
||||
<Input
|
||||
id="ai-api-key"
|
||||
type="password"
|
||||
value={apiKey}
|
||||
onChange={(e) => {
|
||||
setApiKey(e.target.value)
|
||||
localStorage.setItem(
|
||||
STORAGE_AI_API_KEY_KEY,
|
||||
e.target.value,
|
||||
)
|
||||
}}
|
||||
placeholder="Your API key"
|
||||
autoComplete="off"
|
||||
/>
|
||||
<p className="text-[0.8rem] text-muted-foreground">
|
||||
Overrides{" "}
|
||||
{provider === "openai"
|
||||
? "OPENAI_API_KEY"
|
||||
: provider === "anthropic"
|
||||
? "ANTHROPIC_API_KEY"
|
||||
: provider === "google"
|
||||
? "GOOGLE_GENERATIVE_AI_API_KEY"
|
||||
: provider === "azure"
|
||||
? "AZURE_API_KEY"
|
||||
: provider ===
|
||||
"openrouter"
|
||||
? "OPENROUTER_API_KEY"
|
||||
: provider ===
|
||||
"deepseek"
|
||||
? "DEEPSEEK_API_KEY"
|
||||
: provider ===
|
||||
"siliconflow"
|
||||
? "SILICONFLOW_API_KEY"
|
||||
: "server API key"}
|
||||
</p>
|
||||
</div>
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="ai-base-url">
|
||||
Base URL (optional)
|
||||
</Label>
|
||||
<Input
|
||||
id="ai-base-url"
|
||||
value={baseUrl}
|
||||
onChange={(e) => {
|
||||
setBaseUrl(e.target.value)
|
||||
localStorage.setItem(
|
||||
STORAGE_AI_BASE_URL_KEY,
|
||||
e.target.value,
|
||||
)
|
||||
}}
|
||||
placeholder={
|
||||
provider === "anthropic"
|
||||
? "https://api.anthropic.com/v1"
|
||||
: provider === "siliconflow"
|
||||
? "https://api.siliconflow.com/v1"
|
||||
: "Custom endpoint URL"
|
||||
}
|
||||
/>
|
||||
</div>
|
||||
<Button
|
||||
variant="outline"
|
||||
size="sm"
|
||||
className="w-full"
|
||||
onClick={() => {
|
||||
localStorage.removeItem(
|
||||
STORAGE_AI_PROVIDER_KEY,
|
||||
)
|
||||
localStorage.removeItem(
|
||||
STORAGE_AI_BASE_URL_KEY,
|
||||
)
|
||||
localStorage.removeItem(
|
||||
STORAGE_AI_API_KEY_KEY,
|
||||
)
|
||||
localStorage.removeItem(
|
||||
STORAGE_AI_MODEL_KEY,
|
||||
)
|
||||
setProvider("")
|
||||
setBaseUrl("")
|
||||
setApiKey("")
|
||||
setModelId("")
|
||||
}}
|
||||
>
|
||||
Clear Settings
|
||||
</Button>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="space-y-0.5">
|
||||
<Label htmlFor="theme-toggle">Theme</Label>
|
||||
<p className="text-[0.8rem] text-muted-foreground">
|
||||
Dark/Light mode for interface and DrawIO canvas.
|
||||
{error && (
|
||||
<p className="text-[0.8rem] text-destructive">
|
||||
{error}
|
||||
</p>
|
||||
</div>
|
||||
<Button
|
||||
id="theme-toggle"
|
||||
variant="outline"
|
||||
size="icon"
|
||||
onClick={onToggleDarkMode}
|
||||
>
|
||||
{darkMode ? (
|
||||
<Sun className="h-4 w-4" />
|
||||
) : (
|
||||
<Moon className="h-4 w-4" />
|
||||
)}
|
||||
</Button>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="space-y-0.5">
|
||||
<Label htmlFor="drawio-ui">DrawIO Style</Label>
|
||||
<p className="text-[0.8rem] text-muted-foreground">
|
||||
Canvas style:{" "}
|
||||
{drawioUi === "min" ? "Minimal" : "Sketch"}
|
||||
</p>
|
||||
</div>
|
||||
<Button
|
||||
id="drawio-ui"
|
||||
variant="outline"
|
||||
size="sm"
|
||||
onClick={onToggleDrawioUi}
|
||||
>
|
||||
Switch to{" "}
|
||||
{drawioUi === "min" ? "Sketch" : "Minimal"}
|
||||
</Button>
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="space-y-0.5">
|
||||
<Label htmlFor="close-protection">
|
||||
@@ -419,17 +134,21 @@ export function SettingsDialog({
|
||||
<Switch
|
||||
id="close-protection"
|
||||
checked={closeProtection}
|
||||
onCheckedChange={(checked) => {
|
||||
setCloseProtection(checked)
|
||||
localStorage.setItem(
|
||||
STORAGE_CLOSE_PROTECTION_KEY,
|
||||
checked.toString(),
|
||||
)
|
||||
onCloseProtectionChange?.(checked)
|
||||
}}
|
||||
onCheckedChange={setCloseProtection}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
<DialogFooter>
|
||||
<Button
|
||||
variant="outline"
|
||||
onClick={() => onOpenChange(false)}
|
||||
>
|
||||
Cancel
|
||||
</Button>
|
||||
<Button onClick={handleSave} disabled={isVerifying}>
|
||||
{isVerifying ? "Verifying..." : "Save"}
|
||||
</Button>
|
||||
</DialogFooter>
|
||||
</DialogContent>
|
||||
</Dialog>
|
||||
)
|
||||
|
||||
@@ -1,33 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import * as CollapsiblePrimitive from "@radix-ui/react-collapsible"
|
||||
|
||||
function Collapsible({
|
||||
...props
|
||||
}: React.ComponentProps<typeof CollapsiblePrimitive.Root>) {
|
||||
return <CollapsiblePrimitive.Root data-slot="collapsible" {...props} />
|
||||
}
|
||||
|
||||
function CollapsibleTrigger({
|
||||
...props
|
||||
}: React.ComponentProps<typeof CollapsiblePrimitive.CollapsibleTrigger>) {
|
||||
return (
|
||||
<CollapsiblePrimitive.CollapsibleTrigger
|
||||
data-slot="collapsible-trigger"
|
||||
{...props}
|
||||
/>
|
||||
)
|
||||
}
|
||||
|
||||
function CollapsibleContent({
|
||||
...props
|
||||
}: React.ComponentProps<typeof CollapsiblePrimitive.CollapsibleContent>) {
|
||||
return (
|
||||
<CollapsiblePrimitive.CollapsibleContent
|
||||
data-slot="collapsible-content"
|
||||
{...props}
|
||||
/>
|
||||
)
|
||||
}
|
||||
|
||||
export { Collapsible, CollapsibleTrigger, CollapsibleContent }
|
||||
@@ -3,9 +3,8 @@
|
||||
import type React from "react"
|
||||
import { createContext, useContext, useRef, useState } from "react"
|
||||
import type { DrawIoEmbedRef } from "react-drawio"
|
||||
import { STORAGE_DIAGRAM_XML_KEY } from "@/components/chat-panel"
|
||||
import type { ExportFormat } from "@/components/save-dialog"
|
||||
import { extractDiagramXML, validateAndFixXml } from "../lib/utils"
|
||||
import { extractDiagramXML, validateMxCellStructure } from "../lib/utils"
|
||||
|
||||
interface DiagramContextType {
|
||||
chartXML: string
|
||||
@@ -25,7 +24,6 @@ interface DiagramContextType {
|
||||
) => void
|
||||
isDrawioReady: boolean
|
||||
onDrawioLoad: () => void
|
||||
resetDrawioReady: () => void
|
||||
}
|
||||
|
||||
const DiagramContext = createContext<DiagramContextType | undefined>(undefined)
|
||||
@@ -47,16 +45,9 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
|
||||
// Only set ready state once to prevent infinite loops
|
||||
if (hasCalledOnLoadRef.current) return
|
||||
hasCalledOnLoadRef.current = true
|
||||
// console.log("[DiagramContext] DrawIO loaded, setting ready state")
|
||||
console.log("[DiagramContext] DrawIO loaded, setting ready state")
|
||||
setIsDrawioReady(true)
|
||||
}
|
||||
|
||||
const resetDrawioReady = () => {
|
||||
// console.log("[DiagramContext] Resetting DrawIO ready state")
|
||||
hasCalledOnLoadRef.current = false
|
||||
setIsDrawioReady(false)
|
||||
}
|
||||
|
||||
// Track if we're expecting an export for file save (stores raw export data)
|
||||
const saveResolverRef = useRef<{
|
||||
resolver: ((data: string) => void) | null
|
||||
@@ -86,34 +77,21 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
|
||||
chart: string,
|
||||
skipValidation?: boolean,
|
||||
): string | null => {
|
||||
let xmlToLoad = chart
|
||||
|
||||
// Validate XML structure before loading (unless skipped for internal use)
|
||||
if (!skipValidation) {
|
||||
const validation = validateAndFixXml(chart)
|
||||
if (!validation.valid) {
|
||||
console.warn(
|
||||
"[loadDiagram] Validation error:",
|
||||
validation.error,
|
||||
)
|
||||
return validation.error
|
||||
}
|
||||
// Use fixed XML if auto-fix was applied
|
||||
if (validation.fixed) {
|
||||
console.log(
|
||||
"[loadDiagram] Auto-fixed XML issues:",
|
||||
validation.fixes,
|
||||
)
|
||||
xmlToLoad = validation.fixed
|
||||
const validationError = validateMxCellStructure(chart)
|
||||
if (validationError) {
|
||||
console.warn("[loadDiagram] Validation error:", validationError)
|
||||
return validationError
|
||||
}
|
||||
}
|
||||
|
||||
// Keep chartXML in sync even when diagrams are injected (e.g., display_diagram tool)
|
||||
setChartXML(xmlToLoad)
|
||||
setChartXML(chart)
|
||||
|
||||
if (drawioRef.current) {
|
||||
drawioRef.current.load({
|
||||
xml: xmlToLoad,
|
||||
xml: chart,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -193,9 +171,6 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
|
||||
fileContent = xmlContent
|
||||
mimeType = "application/xml"
|
||||
extension = ".drawio"
|
||||
|
||||
// Save to localStorage when user manually saves
|
||||
localStorage.setItem(STORAGE_DIAGRAM_XML_KEY, xmlContent)
|
||||
} else if (format === "png") {
|
||||
// PNG data comes as base64 data URL
|
||||
fileContent = exportData
|
||||
@@ -276,7 +251,6 @@ export function DiagramProvider({ children }: { children: React.ReactNode }) {
|
||||
saveDiagramToFile,
|
||||
isDrawioReady,
|
||||
onDrawioLoad,
|
||||
resetDrawioReady,
|
||||
}}
|
||||
>
|
||||
{children}
|
||||
|
||||
@@ -1,12 +0,0 @@
services:
    drawio:
        image: jgraph/drawio:latest
        ports: ["8080:8080"]
    next-ai-draw-io:
        build:
            context: .
            args:
                - NEXT_PUBLIC_DRAWIO_BASE_URL=http://localhost:8080
        ports: ["3000:3000"]
        env_file: .env
        depends_on: [drawio]
@@ -80,23 +80,13 @@ SILICONFLOW_BASE_URL=https://api.siliconflow.com/v1 # or https://api.siliconflo

```bash
AZURE_API_KEY=your_api_key
AZURE_RESOURCE_NAME=your-resource-name # Required: your Azure resource name
AI_MODEL=your-deployment-name
```

Or use a custom endpoint instead of resource name:
Optional custom endpoint:

```bash
AZURE_API_KEY=your_api_key
AZURE_BASE_URL=https://your-resource.openai.azure.com # Alternative to AZURE_RESOURCE_NAME
AI_MODEL=your-deployment-name
```

Optional reasoning configuration:

```bash
AZURE_REASONING_EFFORT=low # Optional: low, medium, high
AZURE_REASONING_SUMMARY=detailed # Optional: none, brief, detailed
AZURE_BASE_URL=https://your-resource.openai.azure.com
```

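For reference, a minimal sketch of how these variables could map onto the `@ai-sdk/azure` provider is shown below. It assumes `createAzure`'s `resourceName`, `apiKey`, and `baseURL` options and mirrors the documented precedence (`AZURE_BASE_URL` wins over `AZURE_RESOURCE_NAME`); it is not a copy of the app's actual provider factory.

```ts
// Sketch only: wire the Azure env vars into an @ai-sdk/azure model instance.
// Assumes createAzure({ resourceName, apiKey, baseURL }); not the app's exact code.
import { createAzure } from "@ai-sdk/azure"

const azure = createAzure({
    apiKey: process.env.AZURE_API_KEY,
    // When both are set, AZURE_BASE_URL takes precedence over AZURE_RESOURCE_NAME.
    ...(process.env.AZURE_BASE_URL
        ? { baseURL: process.env.AZURE_BASE_URL }
        : { resourceName: process.env.AZURE_RESOURCE_NAME }),
})

// AI_MODEL holds the Azure deployment name.
const model = azure(process.env.AI_MODEL ?? "your-deployment-name")
```
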
### AWS Bedrock
@@ -108,7 +98,7 @@ AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250514-v1:0
```

Note: On AWS (Lambda, EC2 with IAM role), credentials are automatically obtained from the IAM role.
Note: On AWS (Amplify, Lambda, EC2 with IAM role), credentials are automatically obtained from the IAM role.
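To illustrate the IAM-role path, the sketch below leans on `fromNodeProviderChain` from `@aws-sdk/credential-providers` (which the project's provider module imports); treat `createAmazonBedrock` and its `credentialProvider` option as assumptions about the AI SDK Bedrock provider rather than the app's actual code.

```ts
// Sketch only: let Bedrock resolve credentials from the default AWS chain
// (env vars, shared config, or the IAM role attached on Amplify/Lambda/EC2).
// createAmazonBedrock and credentialProvider are assumed APIs here.
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock"
import { fromNodeProviderChain } from "@aws-sdk/credential-providers"

const bedrock = createAmazonBedrock({
    region: process.env.AWS_REGION ?? "us-east-1",
    credentialProvider: fromNodeProviderChain(),
})

const model = bedrock(
    process.env.AI_MODEL ?? "anthropic.claude-sonnet-4-5-20250514-v1:0",
)
```
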

### OpenRouter

@@ -1,39 +0,0 @@
# Offline Deployment

Deploy Next AI Draw.io offline by self-hosting draw.io to replace `embed.diagrams.net`.

**Note:** `NEXT_PUBLIC_DRAWIO_BASE_URL` is a **build-time** variable. Changing it requires rebuilding the Docker image.

## Docker Compose Setup

1. Clone the repository and define API keys in `.env`.
2. Create `docker-compose.yml`:

```yaml
services:
    drawio:
        image: jgraph/drawio:latest
        ports: ["8080:8080"]
    next-ai-draw-io:
        build:
            context: .
            args:
                - NEXT_PUBLIC_DRAWIO_BASE_URL=http://localhost:8080
        ports: ["3000:3000"]
        env_file: .env
        depends_on: [drawio]
```

3. Run `docker compose up -d` and open `http://localhost:3000`.

## Configuration & Critical Warning

**The `NEXT_PUBLIC_DRAWIO_BASE_URL` must be accessible from the user's browser.**

| Scenario | URL Value |
|----------|-----------|
| Localhost | `http://localhost:8080` |
| Remote/Server | `http://YOUR_SERVER_IP:8080` or `https://drawio.your-domain.com` |

**Do NOT use** internal Docker aliases like `http://drawio:8080`; the browser cannot resolve them.
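Because `NEXT_PUBLIC_*` values are inlined into the client bundle at build time, the app can only fall back to the public embed host when the variable is unset. A minimal sketch of that fallback is shown below; the constant name is illustrative, not necessarily the one the app uses.

```ts
// Sketch only: resolve the draw.io embed origin on the client.
// NEXT_PUBLIC_DRAWIO_BASE_URL is baked in at build time, so changing it
// means rebuilding the image; the URL must be reachable from the browser.
export const DRAWIO_BASE_URL =
    process.env.NEXT_PUBLIC_DRAWIO_BASE_URL || "https://embed.diagrams.net"
```
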
env.example
@@ -11,49 +11,28 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
|
||||
# AWS_REGION=us-east-1
|
||||
# AWS_ACCESS_KEY_ID=your-access-key-id
|
||||
# AWS_SECRET_ACCESS_KEY=your-secret-access-key
|
||||
# Note: Claude and Nova models support reasoning/extended thinking
|
||||
# BEDROCK_REASONING_BUDGET_TOKENS=12000 # Optional: Claude reasoning budget in tokens (1024-64000)
|
||||
# BEDROCK_REASONING_EFFORT=medium # Optional: Nova reasoning effort (low/medium/high)
|
||||
|
||||
# OpenAI Configuration
|
||||
# OPENAI_API_KEY=sk-...
|
||||
# OPENAI_BASE_URL=https://api.openai.com/v1 # Optional: Custom OpenAI-compatible endpoint
|
||||
# OPENAI_ORGANIZATION=org-... # Optional
|
||||
# OPENAI_PROJECT=proj_... # Optional
|
||||
# Note: o1/o3/gpt-5 models automatically enable reasoning summary (default: detailed)
|
||||
# OPENAI_REASONING_EFFORT=low # Optional: Reasoning effort (minimal/low/medium/high) - for o1/o3/gpt-5
|
||||
# OPENAI_REASONING_SUMMARY=detailed # Optional: Override reasoning summary (none/brief/detailed)
|
||||
|
||||
# Anthropic (Direct) Configuration
|
||||
# ANTHROPIC_API_KEY=sk-ant-...
|
||||
# ANTHROPIC_BASE_URL=https://your-custom-anthropic/v1
|
||||
# ANTHROPIC_THINKING_TYPE=enabled # Optional: Anthropic extended thinking (enabled)
|
||||
# ANTHROPIC_THINKING_BUDGET_TOKENS=12000 # Optional: Budget for extended thinking in tokens
|
||||
|
||||
# Google Generative AI Configuration
|
||||
# GOOGLE_GENERATIVE_AI_API_KEY=...
|
||||
# GOOGLE_BASE_URL=https://generativelanguage.googleapis.com/v1beta # Optional: Custom endpoint
|
||||
# GOOGLE_CANDIDATE_COUNT=1 # Optional: Number of candidates to generate
|
||||
# GOOGLE_TOP_K=40 # Optional: Top K sampling parameter
|
||||
# GOOGLE_TOP_P=0.95 # Optional: Nucleus sampling parameter
|
||||
# Note: Gemini 2.5/3 models automatically enable reasoning display (includeThoughts: true)
|
||||
# GOOGLE_THINKING_BUDGET=8192 # Optional: Gemini 2.5 thinking budget in tokens (for more/less thinking)
|
||||
# GOOGLE_THINKING_LEVEL=high # Optional: Gemini 3 thinking level (low/high)
|
||||
|
||||
# Azure OpenAI Configuration
|
||||
# Configure endpoint using ONE of these methods:
|
||||
# 1. AZURE_RESOURCE_NAME - SDK constructs: https://{name}.openai.azure.com/openai/v1{path}
|
||||
# 2. AZURE_BASE_URL - SDK appends /v1{path} to your URL
|
||||
# If both are set, AZURE_BASE_URL takes precedence.
|
||||
# AZURE_RESOURCE_NAME=your-resource-name
|
||||
# AZURE_API_KEY=...
|
||||
# AZURE_BASE_URL=https://your-resource.openai.azure.com/openai # Alternative: Custom endpoint
|
||||
# AZURE_REASONING_EFFORT=low # Optional: Azure reasoning effort (low, medium, high)
|
||||
# AZURE_REASONING_SUMMARY=detailed
|
||||
# AZURE_BASE_URL=https://your-resource.openai.azure.com # Optional: Custom endpoint (overrides resourceName)
|
||||
|
||||
# Ollama (Local) Configuration
|
||||
# OLLAMA_BASE_URL=http://localhost:11434/api # Optional, defaults to localhost
|
||||
# OLLAMA_ENABLE_THINKING=true # Optional: Enable thinking for models that support it (e.g., qwen3)
|
||||
|
||||
# OpenRouter Configuration
|
||||
# OPENROUTER_API_KEY=sk-or-v1-...
|
||||
@@ -81,13 +60,3 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
|
||||
|
||||
# Access Control (Optional)
|
||||
# ACCESS_CODE_LIST=your-secret-code,another-code
|
||||
|
||||
# Draw.io Configuration (Optional)
|
||||
# NEXT_PUBLIC_DRAWIO_BASE_URL=https://embed.diagrams.net # Default: https://embed.diagrams.net
|
||||
# Use this to point to a self-hosted draw.io instance
|
||||
|
||||
# PDF Input Feature (Optional)
|
||||
# Enable PDF file upload to extract text and generate diagrams
|
||||
# Enabled by default. Set to "false" to disable.
|
||||
# ENABLE_PDF_INPUT=true
|
||||
# NEXT_PUBLIC_MAX_EXTRACTED_CHARS=150000 # Max characters for PDF/text extraction (default: 150000)
|
||||
|
||||
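As a rough sketch of how the optional access control above can be enforced server-side, the check below parses `ACCESS_CODE_LIST` and compares it against the `x-access-code` header the client sends; the function name and exact wiring are illustrative, not the project's actual route handler.

```ts
// Sketch only: validate the x-access-code header against ACCESS_CODE_LIST.
// ACCESS_CODE_LIST is a comma-separated list; an empty list disables the check.
// isAccessCodeValid is a hypothetical helper name, not the app's real one.
export function isAccessCodeValid(headerValue: string | null): boolean {
    const codes = (process.env.ACCESS_CODE_LIST ?? "")
        .split(",")
        .map((code) => code.trim())
        .filter(Boolean)

    if (codes.length === 0) return true // access control not enabled
    return headerValue !== null && codes.includes(headerValue.trim())
}
```
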
@@ -1,7 +1,15 @@
|
||||
import { LangfuseSpanProcessor } from "@langfuse/otel"
|
||||
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node"
|
||||
|
||||
export function register() {
|
||||
// Skip on edge/worker runtime (Cloudflare Workers, Vercel Edge)
|
||||
// OpenTelemetry Node SDK requires Node.js-specific APIs
|
||||
if (
|
||||
typeof process === "undefined" ||
|
||||
!process.versions?.node ||
|
||||
// @ts-expect-error - EdgeRuntime is a global in edge environments
|
||||
typeof EdgeRuntime !== "undefined"
|
||||
) {
|
||||
return
|
||||
}
|
||||
|
||||
// Skip telemetry if Langfuse env vars are not configured
|
||||
if (!process.env.LANGFUSE_PUBLIC_KEY || !process.env.LANGFUSE_SECRET_KEY) {
|
||||
console.warn(
|
||||
@@ -10,12 +18,16 @@ export function register() {
|
||||
return
|
||||
}
|
||||
|
||||
// Dynamic imports to avoid bundling Node.js-specific modules in edge builds
|
||||
const { LangfuseSpanProcessor } = require("@langfuse/otel")
|
||||
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node")
|
||||
|
||||
const langfuseSpanProcessor = new LangfuseSpanProcessor({
|
||||
publicKey: process.env.LANGFUSE_PUBLIC_KEY,
|
||||
secretKey: process.env.LANGFUSE_SECRET_KEY,
|
||||
baseUrl: process.env.LANGFUSE_BASEURL,
|
||||
// Filter out Next.js HTTP request spans so AI SDK spans become root traces
|
||||
shouldExportSpan: ({ otelSpan }) => {
|
||||
shouldExportSpan: ({ otelSpan }: { otelSpan: { name: string } }) => {
|
||||
const spanName = otelSpan.name
|
||||
// Skip Next.js HTTP infrastructure spans
|
||||
if (
|
||||
|
||||
@@ -1,26 +0,0 @@
|
||||
import { STORAGE_KEYS } from "./storage"
|
||||
|
||||
/**
|
||||
* Get AI configuration from localStorage.
|
||||
* Returns API keys and settings for custom AI providers.
|
||||
* Used to override server defaults when user provides their own API key.
|
||||
*/
|
||||
export function getAIConfig() {
|
||||
if (typeof window === "undefined") {
|
||||
return {
|
||||
accessCode: "",
|
||||
aiProvider: "",
|
||||
aiBaseUrl: "",
|
||||
aiApiKey: "",
|
||||
aiModel: "",
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
accessCode: localStorage.getItem(STORAGE_KEYS.accessCode) || "",
|
||||
aiProvider: localStorage.getItem(STORAGE_KEYS.aiProvider) || "",
|
||||
aiBaseUrl: localStorage.getItem(STORAGE_KEYS.aiBaseUrl) || "",
|
||||
aiApiKey: localStorage.getItem(STORAGE_KEYS.aiApiKey) || "",
|
||||
aiModel: localStorage.getItem(STORAGE_KEYS.aiModel) || "",
|
||||
}
|
||||
}
|
||||
@@ -4,10 +4,16 @@ import { azure, createAzure } from "@ai-sdk/azure"
|
||||
import { createDeepSeek, deepseek } from "@ai-sdk/deepseek"
|
||||
import { createGoogleGenerativeAI, google } from "@ai-sdk/google"
|
||||
import { createOpenAI, openai } from "@ai-sdk/openai"
|
||||
import { fromNodeProviderChain } from "@aws-sdk/credential-providers"
|
||||
import { createOpenRouter } from "@openrouter/ai-sdk-provider"
|
||||
import { createOllama, ollama } from "ollama-ai-provider-v2"
|
||||
|
||||
// Detect if running in edge/worker runtime (Cloudflare Workers, Vercel Edge, etc.)
|
||||
const isEdgeRuntime =
|
||||
typeof process === "undefined" ||
|
||||
!process.versions?.node ||
|
||||
// @ts-expect-error - EdgeRuntime is a global in edge environments
|
||||
typeof EdgeRuntime !== "undefined"
|
||||
|
||||
export type ProviderName =
|
||||
| "bedrock"
|
||||
| "openai"
|
||||
@@ -26,24 +32,6 @@ interface ModelConfig {
|
||||
modelId: string
|
||||
}
|
||||
|
||||
export interface ClientOverrides {
|
||||
provider?: string | null
|
||||
baseUrl?: string | null
|
||||
apiKey?: string | null
|
||||
modelId?: string | null
|
||||
}
|
||||
|
||||
// Providers that can be used with client-provided API keys
|
||||
const ALLOWED_CLIENT_PROVIDERS: ProviderName[] = [
|
||||
"openai",
|
||||
"anthropic",
|
||||
"google",
|
||||
"azure",
|
||||
"openrouter",
|
||||
"deepseek",
|
||||
"siliconflow",
|
||||
]
|
||||
|
||||
// Bedrock provider options for Anthropic beta features
|
||||
const BEDROCK_ANTHROPIC_BETA = {
|
||||
bedrock: {
|
||||
@@ -56,295 +44,6 @@ const ANTHROPIC_BETA_HEADERS = {
|
||||
"anthropic-beta": "fine-grained-tool-streaming-2025-05-14",
|
||||
}
|
||||
|
||||
/**
|
||||
* Safely parse integer from environment variable with validation
|
||||
*/
|
||||
function parseIntSafe(
|
||||
value: string | undefined,
|
||||
varName: string,
|
||||
min?: number,
|
||||
max?: number,
|
||||
): number | undefined {
|
||||
if (!value) return undefined
|
||||
const parsed = Number.parseInt(value, 10)
|
||||
if (Number.isNaN(parsed)) {
|
||||
throw new Error(`${varName} must be a valid integer, got: ${value}`)
|
||||
}
|
||||
if (min !== undefined && parsed < min) {
|
||||
throw new Error(`${varName} must be >= ${min}, got: ${parsed}`)
|
||||
}
|
||||
if (max !== undefined && parsed > max) {
|
||||
throw new Error(`${varName} must be <= ${max}, got: ${parsed}`)
|
||||
}
|
||||
return parsed
|
||||
}
|
||||
|
||||
/**
 * Build provider-specific options from environment variables
 * Supports various AI SDK providers with their unique configuration options
 *
 * Environment variables:
 * - OPENAI_REASONING_EFFORT: OpenAI reasoning effort level (minimal/low/medium/high) - for o1/o3/gpt-5
 * - OPENAI_REASONING_SUMMARY: OpenAI reasoning summary (none/brief/detailed) - auto-enabled for o1/o3/gpt-5
 * - ANTHROPIC_THINKING_BUDGET_TOKENS: Anthropic thinking budget in tokens (1024-64000)
 * - ANTHROPIC_THINKING_TYPE: Anthropic thinking type (enabled)
 * - GOOGLE_THINKING_BUDGET: Google Gemini 2.5 thinking budget in tokens (1024-100000)
 * - GOOGLE_THINKING_LEVEL: Google Gemini 3 thinking level (low/high)
 * - AZURE_REASONING_EFFORT: Azure/OpenAI reasoning effort (low/medium/high)
 * - AZURE_REASONING_SUMMARY: Azure reasoning summary (none/brief/detailed)
 * - BEDROCK_REASONING_BUDGET_TOKENS: Bedrock Claude reasoning budget in tokens (1024-64000)
 * - BEDROCK_REASONING_EFFORT: Bedrock Nova reasoning effort (low/medium/high)
 * - OLLAMA_ENABLE_THINKING: Enable Ollama thinking mode (set to "true")
 */
function buildProviderOptions(
    provider: ProviderName,
    modelId?: string,
): Record<string, any> | undefined {
const options: Record<string, any> = {}
|
||||
|
||||
switch (provider) {
|
||||
case "openai": {
|
||||
const reasoningEffort = process.env.OPENAI_REASONING_EFFORT
|
||||
const reasoningSummary = process.env.OPENAI_REASONING_SUMMARY
|
||||
|
||||
// OpenAI reasoning models (o1, o3, gpt-5) need reasoningSummary to return thoughts
|
||||
if (
|
||||
modelId &&
|
||||
(modelId.includes("o1") ||
|
||||
modelId.includes("o3") ||
|
||||
modelId.includes("gpt-5"))
|
||||
) {
|
||||
options.openai = {
|
||||
// Auto-enable reasoning summary for reasoning models (default: detailed)
|
||||
reasoningSummary:
|
||||
(reasoningSummary as "none" | "brief" | "detailed") ||
|
||||
"detailed",
|
||||
}
|
||||
|
||||
// Optionally configure reasoning effort
|
||||
if (reasoningEffort) {
|
||||
options.openai.reasoningEffort = reasoningEffort as
|
||||
| "minimal"
|
||||
| "low"
|
||||
| "medium"
|
||||
| "high"
|
||||
}
|
||||
} else if (reasoningEffort || reasoningSummary) {
|
||||
// Non-reasoning models: only apply if explicitly configured
|
||||
options.openai = {}
|
||||
if (reasoningEffort) {
|
||||
options.openai.reasoningEffort = reasoningEffort as
|
||||
| "minimal"
|
||||
| "low"
|
||||
| "medium"
|
||||
| "high"
|
||||
}
|
||||
if (reasoningSummary) {
|
||||
options.openai.reasoningSummary = reasoningSummary as
|
||||
| "none"
|
||||
| "brief"
|
||||
| "detailed"
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "anthropic": {
|
||||
const thinkingBudget = parseIntSafe(
|
||||
process.env.ANTHROPIC_THINKING_BUDGET_TOKENS,
|
||||
"ANTHROPIC_THINKING_BUDGET_TOKENS",
|
||||
1024,
|
||||
64000,
|
||||
)
|
||||
const thinkingType =
|
||||
process.env.ANTHROPIC_THINKING_TYPE || "enabled"
|
||||
|
||||
if (thinkingBudget) {
|
||||
options.anthropic = {
|
||||
thinking: {
|
||||
type: thinkingType,
|
||||
budgetTokens: thinkingBudget,
|
||||
},
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "google": {
|
||||
const reasoningEffort = process.env.GOOGLE_REASONING_EFFORT
|
||||
const thinkingBudgetVal = parseIntSafe(
|
||||
process.env.GOOGLE_THINKING_BUDGET,
|
||||
"GOOGLE_THINKING_BUDGET",
|
||||
1024,
|
||||
100000,
|
||||
)
|
||||
const thinkingLevel = process.env.GOOGLE_THINKING_LEVEL
|
||||
|
||||
// Google Gemini 2.5/3 models think by default, but need includeThoughts: true
|
||||
// to return the reasoning in the response
|
||||
if (
|
||||
modelId &&
|
||||
(modelId.includes("gemini-2") ||
|
||||
modelId.includes("gemini-3") ||
|
||||
modelId.includes("gemini2") ||
|
||||
modelId.includes("gemini3"))
|
||||
) {
|
||||
const thinkingConfig: Record<string, any> = {
|
||||
includeThoughts: true,
|
||||
}
|
||||
|
||||
// Optionally configure thinking budget or level
|
||||
if (
|
||||
thinkingBudgetVal &&
|
||||
(modelId.includes("2.5") || modelId.includes("2-5"))
|
||||
) {
|
||||
thinkingConfig.thinkingBudget = thinkingBudgetVal
|
||||
} else if (
|
||||
thinkingLevel &&
|
||||
(modelId.includes("gemini-3") ||
|
||||
modelId.includes("gemini3"))
|
||||
) {
|
||||
thinkingConfig.thinkingLevel = thinkingLevel as
|
||||
| "low"
|
||||
| "high"
|
||||
}
|
||||
|
||||
options.google = { thinkingConfig }
|
||||
} else if (reasoningEffort) {
|
||||
options.google = {
|
||||
reasoningEffort: reasoningEffort as
|
||||
| "low"
|
||||
| "medium"
|
||||
| "high",
|
||||
}
|
||||
}
|
||||
|
||||
// Keep existing Google options
|
||||
const options_obj: Record<string, any> = {}
|
||||
const candidateCount = parseIntSafe(
|
||||
process.env.GOOGLE_CANDIDATE_COUNT,
|
||||
"GOOGLE_CANDIDATE_COUNT",
|
||||
1,
|
||||
8,
|
||||
)
|
||||
if (candidateCount) {
|
||||
options_obj.candidateCount = candidateCount
|
||||
}
|
||||
const topK = parseIntSafe(
|
||||
process.env.GOOGLE_TOP_K,
|
||||
"GOOGLE_TOP_K",
|
||||
1,
|
||||
100,
|
||||
)
|
||||
if (topK) {
|
||||
options_obj.topK = topK
|
||||
}
|
||||
if (process.env.GOOGLE_TOP_P) {
|
||||
const topP = Number.parseFloat(process.env.GOOGLE_TOP_P)
|
||||
if (Number.isNaN(topP) || topP < 0 || topP > 1) {
|
||||
throw new Error(
|
||||
`GOOGLE_TOP_P must be a number between 0 and 1, got: ${process.env.GOOGLE_TOP_P}`,
|
||||
)
|
||||
}
|
||||
options_obj.topP = topP
|
||||
}
|
||||
|
||||
if (Object.keys(options_obj).length > 0) {
|
||||
options.google = { ...options.google, ...options_obj }
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "azure": {
|
||||
const reasoningEffort = process.env.AZURE_REASONING_EFFORT
|
||||
const reasoningSummary = process.env.AZURE_REASONING_SUMMARY
|
||||
|
||||
if (reasoningEffort || reasoningSummary) {
|
||||
options.azure = {}
|
||||
if (reasoningEffort) {
|
||||
options.azure.reasoningEffort = reasoningEffort as
|
||||
| "low"
|
||||
| "medium"
|
||||
| "high"
|
||||
}
|
||||
if (reasoningSummary) {
|
||||
options.azure.reasoningSummary = reasoningSummary as
|
||||
| "none"
|
||||
| "brief"
|
||||
| "detailed"
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "bedrock": {
|
||||
const budgetTokens = parseIntSafe(
|
||||
process.env.BEDROCK_REASONING_BUDGET_TOKENS,
|
||||
"BEDROCK_REASONING_BUDGET_TOKENS",
|
||||
1024,
|
||||
64000,
|
||||
)
|
||||
const reasoningEffort = process.env.BEDROCK_REASONING_EFFORT
|
||||
|
||||
// Bedrock reasoning ONLY for Claude and Nova models
|
||||
// Other models (MiniMax, etc.) don't support reasoningConfig
|
||||
if (
|
||||
modelId &&
|
||||
(budgetTokens || reasoningEffort) &&
|
||||
(modelId.includes("claude") ||
|
||||
modelId.includes("anthropic") ||
|
||||
modelId.includes("nova") ||
|
||||
modelId.includes("amazon"))
|
||||
) {
|
||||
const reasoningConfig: Record<string, any> = { type: "enabled" }
|
||||
|
||||
// Claude models: use budgetTokens (1024-64000)
|
||||
if (
|
||||
budgetTokens &&
|
||||
(modelId.includes("claude") ||
|
||||
modelId.includes("anthropic"))
|
||||
) {
|
||||
reasoningConfig.budgetTokens = budgetTokens
|
||||
}
|
||||
// Nova models: use maxReasoningEffort (low/medium/high)
|
||||
else if (
|
||||
reasoningEffort &&
|
||||
(modelId.includes("nova") || modelId.includes("amazon"))
|
||||
) {
|
||||
reasoningConfig.maxReasoningEffort = reasoningEffort as
|
||||
| "low"
|
||||
| "medium"
|
||||
| "high"
|
||||
}
|
||||
|
||||
options.bedrock = { reasoningConfig }
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "ollama": {
|
||||
const enableThinking = process.env.OLLAMA_ENABLE_THINKING
|
||||
// Ollama supports reasoning with think: true for models like qwen3
|
||||
if (enableThinking === "true") {
|
||||
options.ollama = { think: true }
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "deepseek":
|
||||
case "openrouter":
|
||||
case "siliconflow": {
|
||||
// These providers don't have reasoning configs in AI SDK yet
|
||||
break
|
||||
}
|
||||
|
||||
default:
|
||||
break
|
||||
}
|
||||
|
||||
return Object.keys(options).length > 0 ? options : undefined
|
||||
}
|
||||
|
||||
// Map of provider to required environment variable
|
||||
const PROVIDER_ENV_VARS: Record<ProviderName, string | null> = {
|
||||
bedrock: null, // AWS SDK auto-uses IAM role on AWS, or env vars locally
|
||||
@@ -371,16 +70,7 @@ function detectProvider(): ProviderName | null {
|
||||
continue
|
||||
}
|
||||
if (process.env[envVar]) {
|
||||
// Azure requires additional config (baseURL or resourceName)
|
||||
if (provider === "azure") {
|
||||
const hasBaseUrl = !!process.env.AZURE_BASE_URL
|
||||
const hasResourceName = !!process.env.AZURE_RESOURCE_NAME
|
||||
if (hasBaseUrl || hasResourceName) {
|
||||
configuredProviders.push(provider as ProviderName)
|
||||
}
|
||||
} else {
|
||||
configuredProviders.push(provider as ProviderName)
|
||||
}
|
||||
configuredProviders.push(provider as ProviderName)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -402,18 +92,6 @@ function validateProviderCredentials(provider: ProviderName): void {
|
||||
`Please set it in your .env.local file.`,
|
||||
)
|
||||
}
|
||||
|
||||
// Azure requires either AZURE_BASE_URL or AZURE_RESOURCE_NAME in addition to API key
|
||||
if (provider === "azure") {
|
||||
const hasBaseUrl = !!process.env.AZURE_BASE_URL
|
||||
const hasResourceName = !!process.env.AZURE_RESOURCE_NAME
|
||||
if (!hasBaseUrl && !hasResourceName) {
|
||||
throw new Error(
|
||||
`Azure requires either AZURE_BASE_URL or AZURE_RESOURCE_NAME to be set. ` +
|
||||
`Please set one in your .env.local file.`,
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -437,39 +115,18 @@ function validateProviderCredentials(provider: ProviderName): void {
|
||||
* - SILICONFLOW_API_KEY: SiliconFlow API key
|
||||
* - SILICONFLOW_BASE_URL: SiliconFlow endpoint (optional, defaults to https://api.siliconflow.com/v1)
|
||||
*/
|
||||
export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
// Check if client is providing their own provider override
|
||||
const isClientOverride = !!(overrides?.provider && overrides?.apiKey)
|
||||
|
||||
// Use client override if provided, otherwise fall back to env vars
|
||||
const modelId = overrides?.modelId || process.env.AI_MODEL
|
||||
export function getAIModel(): ModelConfig {
|
||||
const modelId = process.env.AI_MODEL
|
||||
|
||||
if (!modelId) {
|
||||
if (isClientOverride) {
|
||||
throw new Error(
|
||||
`Model ID is required when using custom AI provider. Please specify a model in Settings.`,
|
||||
)
|
||||
}
|
||||
throw new Error(
|
||||
`AI_MODEL environment variable is required. Example: AI_MODEL=claude-sonnet-4-5`,
|
||||
)
|
||||
}
|
||||
|
||||
// Determine provider: client override > explicit config > auto-detect > error
|
||||
// Determine provider: explicit config > auto-detect > error
|
||||
let provider: ProviderName
|
||||
if (overrides?.provider) {
|
||||
// Validate client-provided provider
|
||||
if (
|
||||
!ALLOWED_CLIENT_PROVIDERS.includes(
|
||||
overrides.provider as ProviderName,
|
||||
)
|
||||
) {
|
||||
throw new Error(
|
||||
`Invalid provider: ${overrides.provider}. Allowed providers: ${ALLOWED_CLIENT_PROVIDERS.join(", ")}`,
|
||||
)
|
||||
}
|
||||
provider = overrides.provider as ProviderName
|
||||
} else if (process.env.AI_PROVIDER) {
|
||||
if (process.env.AI_PROVIDER) {
|
||||
provider = process.env.AI_PROVIDER as ProviderName
|
||||
} else {
|
||||
const detected = detectProvider()
|
||||
@@ -504,10 +161,8 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
}
|
||||
}
|
||||
|
||||
// Only validate server credentials if client isn't providing their own API key
|
||||
if (!isClientOverride) {
|
||||
validateProviderCredentials(provider)
|
||||
}
|
||||
// Validate provider credentials
|
||||
validateProviderCredentials(provider)
|
||||
|
||||
console.log(`[AI Provider] Initializing ${provider} with model: ${modelId}`)
|
||||
|
||||
@@ -515,57 +170,65 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
let providerOptions: any
|
||||
let headers: Record<string, string> | undefined
|
||||
|
||||
// Build provider-specific options from environment variables
|
||||
const customProviderOptions = buildProviderOptions(provider, modelId)
|
||||
|
||||
switch (provider) {
|
||||
case "bedrock": {
|
||||
// Use credential provider chain for IAM role support (Lambda, EC2, etc.)
|
||||
// Falls back to env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for local dev
|
||||
const bedrockProvider = createAmazonBedrock({
|
||||
// Edge runtime (Cloudflare Workers, etc.) requires explicit credentials
|
||||
// Node.js runtime can use IAM role chain (Amplify, Lambda, etc.)
|
||||
const bedrockConfig: Parameters<typeof createAmazonBedrock>[0] = {
|
||||
region: process.env.AWS_REGION || "us-west-2",
|
||||
credentialProvider: fromNodeProviderChain(),
|
||||
})
|
||||
}
|
||||
|
||||
if (isEdgeRuntime) {
|
||||
// Edge runtime: use explicit credentials from env vars
|
||||
if (
|
||||
!process.env.AWS_ACCESS_KEY_ID ||
|
||||
!process.env.AWS_SECRET_ACCESS_KEY
|
||||
) {
|
||||
throw new Error(
|
||||
"AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required for Bedrock on edge runtime (Cloudflare Workers)",
|
||||
)
|
||||
}
|
||||
bedrockConfig.accessKeyId = process.env.AWS_ACCESS_KEY_ID
|
||||
bedrockConfig.secretAccessKey =
|
||||
process.env.AWS_SECRET_ACCESS_KEY
|
||||
if (process.env.AWS_SESSION_TOKEN) {
|
||||
bedrockConfig.sessionToken = process.env.AWS_SESSION_TOKEN
|
||||
}
|
||||
} else {
|
||||
// Node.js runtime: use credential provider chain for IAM role support
|
||||
const {
|
||||
fromNodeProviderChain,
|
||||
} = require("@aws-sdk/credential-providers")
|
||||
bedrockConfig.credentialProvider = fromNodeProviderChain()
|
||||
}
|
||||
|
||||
const bedrockProvider = createAmazonBedrock(bedrockConfig)
|
||||
model = bedrockProvider(modelId)
|
||||
// Add Anthropic beta options if using Claude models via Bedrock
|
||||
if (modelId.includes("anthropic.claude")) {
|
||||
// Deep merge to preserve both anthropicBeta and reasoningConfig
|
||||
providerOptions = {
|
||||
bedrock: {
|
||||
...BEDROCK_ANTHROPIC_BETA.bedrock,
|
||||
...(customProviderOptions?.bedrock || {}),
|
||||
},
|
||||
}
|
||||
} else if (customProviderOptions) {
|
||||
providerOptions = customProviderOptions
|
||||
providerOptions = BEDROCK_ANTHROPIC_BETA
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "openai": {
|
||||
const apiKey = overrides?.apiKey || process.env.OPENAI_API_KEY
|
||||
const baseURL = overrides?.baseUrl || process.env.OPENAI_BASE_URL
|
||||
if (baseURL || overrides?.apiKey) {
|
||||
case "openai":
|
||||
if (process.env.OPENAI_BASE_URL) {
|
||||
const customOpenAI = createOpenAI({
|
||||
apiKey,
|
||||
...(baseURL && { baseURL }),
|
||||
apiKey: process.env.OPENAI_API_KEY,
|
||||
baseURL: process.env.OPENAI_BASE_URL,
|
||||
})
|
||||
model = customOpenAI.chat(modelId)
|
||||
} else {
|
||||
model = openai(modelId)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "anthropic": {
|
||||
const apiKey = overrides?.apiKey || process.env.ANTHROPIC_API_KEY
|
||||
const baseURL =
|
||||
overrides?.baseUrl ||
|
||||
process.env.ANTHROPIC_BASE_URL ||
|
||||
"https://api.anthropic.com/v1"
|
||||
const customProvider = createAnthropic({
|
||||
apiKey,
|
||||
baseURL,
|
||||
apiKey: process.env.ANTHROPIC_API_KEY,
|
||||
baseURL:
|
||||
process.env.ANTHROPIC_BASE_URL ||
|
||||
"https://api.anthropic.com/v1",
|
||||
headers: ANTHROPIC_BETA_HEADERS,
|
||||
})
|
||||
model = customProvider(modelId)
|
||||
@@ -574,41 +237,29 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
break
|
||||
}
|
||||
|
||||
case "google": {
|
||||
const apiKey =
|
||||
overrides?.apiKey || process.env.GOOGLE_GENERATIVE_AI_API_KEY
|
||||
const baseURL = overrides?.baseUrl || process.env.GOOGLE_BASE_URL
|
||||
if (baseURL || overrides?.apiKey) {
|
||||
case "google":
|
||||
if (process.env.GOOGLE_BASE_URL) {
|
||||
const customGoogle = createGoogleGenerativeAI({
|
||||
apiKey,
|
||||
...(baseURL && { baseURL }),
|
||||
apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
|
||||
baseURL: process.env.GOOGLE_BASE_URL,
|
||||
})
|
||||
model = customGoogle(modelId)
|
||||
} else {
|
||||
model = google(modelId)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "azure": {
|
||||
const apiKey = overrides?.apiKey || process.env.AZURE_API_KEY
|
||||
const baseURL = overrides?.baseUrl || process.env.AZURE_BASE_URL
|
||||
const resourceName = process.env.AZURE_RESOURCE_NAME
|
||||
// Azure requires either baseURL or resourceName to construct the endpoint
|
||||
// resourceName constructs: https://{resourceName}.openai.azure.com/openai/v1{path}
|
||||
if (baseURL || resourceName || overrides?.apiKey) {
|
||||
case "azure":
|
||||
if (process.env.AZURE_BASE_URL) {
|
||||
const customAzure = createAzure({
|
||||
apiKey,
|
||||
// baseURL takes precedence over resourceName per SDK behavior
|
||||
...(baseURL && { baseURL }),
|
||||
...(!baseURL && resourceName && { resourceName }),
|
||||
apiKey: process.env.AZURE_API_KEY,
|
||||
baseURL: process.env.AZURE_BASE_URL,
|
||||
})
|
||||
model = customAzure(modelId)
|
||||
} else {
|
||||
model = azure(modelId)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "ollama":
|
||||
if (process.env.OLLAMA_BASE_URL) {
|
||||
@@ -622,41 +273,34 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
break
|
||||
|
||||
case "openrouter": {
|
||||
const apiKey = overrides?.apiKey || process.env.OPENROUTER_API_KEY
|
||||
const baseURL =
|
||||
overrides?.baseUrl || process.env.OPENROUTER_BASE_URL
|
||||
const openrouter = createOpenRouter({
|
||||
apiKey,
|
||||
...(baseURL && { baseURL }),
|
||||
apiKey: process.env.OPENROUTER_API_KEY,
|
||||
...(process.env.OPENROUTER_BASE_URL && {
|
||||
baseURL: process.env.OPENROUTER_BASE_URL,
|
||||
}),
|
||||
})
|
||||
model = openrouter(modelId)
|
||||
break
|
||||
}
|
||||
|
||||
case "deepseek": {
|
||||
const apiKey = overrides?.apiKey || process.env.DEEPSEEK_API_KEY
|
||||
const baseURL = overrides?.baseUrl || process.env.DEEPSEEK_BASE_URL
|
||||
if (baseURL || overrides?.apiKey) {
|
||||
case "deepseek":
|
||||
if (process.env.DEEPSEEK_BASE_URL) {
|
||||
const customDeepSeek = createDeepSeek({
|
||||
apiKey,
|
||||
...(baseURL && { baseURL }),
|
||||
apiKey: process.env.DEEPSEEK_API_KEY,
|
||||
baseURL: process.env.DEEPSEEK_BASE_URL,
|
||||
})
|
||||
model = customDeepSeek(modelId)
|
||||
} else {
|
||||
model = deepseek(modelId)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case "siliconflow": {
|
||||
const apiKey = overrides?.apiKey || process.env.SILICONFLOW_API_KEY
|
||||
const baseURL =
|
||||
overrides?.baseUrl ||
|
||||
process.env.SILICONFLOW_BASE_URL ||
|
||||
"https://api.siliconflow.com/v1"
|
||||
const siliconflowProvider = createOpenAI({
|
||||
apiKey,
|
||||
baseURL,
|
||||
apiKey: process.env.SILICONFLOW_API_KEY,
|
||||
baseURL:
|
||||
process.env.SILICONFLOW_BASE_URL ||
|
||||
"https://api.siliconflow.com/v1",
|
||||
})
|
||||
model = siliconflowProvider.chat(modelId)
|
||||
break
|
||||
@@ -668,24 +312,5 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
|
||||
)
|
||||
}
|
||||
|
||||
// Apply provider-specific options for all providers except bedrock (which has special handling)
|
||||
if (customProviderOptions && provider !== "bedrock" && !providerOptions) {
|
||||
providerOptions = customProviderOptions
|
||||
}
|
||||
|
||||
return { model, providerOptions, headers, modelId }
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a model supports prompt caching.
|
||||
* Currently only Claude models on Bedrock support prompt caching.
|
||||
*/
|
||||
export function supportsPromptCaching(modelId: string): boolean {
|
||||
// Bedrock prompt caching is supported for Claude models
|
||||
return (
|
||||
modelId.includes("claude") ||
|
||||
modelId.includes("anthropic") ||
|
||||
modelId.startsWith("us.anthropic") ||
|
||||
modelId.startsWith("eu.anthropic")
|
||||
)
|
||||
}
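Taken together, `getAIModel()` returns `{ model, providerOptions, headers, modelId }`. A hedged sketch of how that result could be fed into the AI SDK, assuming a recent `ai` package where `streamText` accepts `providerOptions` and `headers`; the `@/lib/ai-providers` import path and the surrounding route code are illustrative assumptions, not part of this diff:

```ts
// Sketch only: wiring getAIModel's result into an AI SDK call.
import { streamText } from "ai"
import { getAIModel, supportsPromptCaching } from "@/lib/ai-providers"

export async function runChat(
    messages: Parameters<typeof streamText>[0]["messages"],
) {
    const { model, providerOptions, headers, modelId } = getAIModel()

    // Prompt caching is only meaningful for Claude models on Bedrock
    const cachingAvailable = supportsPromptCaching(modelId)

    const result = streamText({
        model,
        messages,
        // Reasoning/thinking options assembled from env vars by buildProviderOptions
        ...(providerOptions && { providerOptions }),
        // e.g. the anthropic-beta header set for Anthropic models
        ...(headers && { headers }),
    })

    return { result, cachingAvailable }
}
```

The point of the returned shape is that reasoning/thinking settings derived from env vars travel with the model handle instead of being re-derived at each call site.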
@@ -394,366 +394,6 @@ export const CACHED_EXAMPLE_RESPONSES: CachedResponse[] = [
|
||||
</mxCell>
|
||||
</root>`,
|
||||
},
|
||||
{
|
||||
promptText: "Summarize this paper as a diagram",
|
||||
hasImage: true,
|
||||
xml: ` <root>
|
||||
<mxCell id="0" />
|
||||
<mxCell id="1" parent="0" />
|
||||
<mxCell id="title_bg" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#1a237e;strokeColor=none;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="80" width="720" x="40" y="20" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="title" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=22;fontStyle=1;fontColor=#FFFFFF;"
|
||||
value="Chain-of-Thought Prompting<br><font style="font-size: 14px;">Elicits Reasoning in Large Language Models</font>"
|
||||
vertex="1">
|
||||
<mxGeometry height="70" width="720" x="40" y="25" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="authors" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;fontColor=#666666;"
|
||||
value="Wei et al. (Google Research, Brain Team) | NeurIPS 2022" vertex="1">
|
||||
<mxGeometry height="20" width="720" x="40" y="100" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="core_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="💡 Core Idea" vertex="1">
|
||||
<mxGeometry height="30" width="150" x="40" y="125" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="core_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E3F2FD;strokeColor=#1565C0;align=left;spacingLeft=10;spacingRight=10;fontSize=11;"
|
||||
value="<b>Chain of Thought</b> = A series of intermediate reasoning steps that lead to the final answer<br><br>Simply provide a few CoT demonstrations as exemplars in few-shot prompting"
|
||||
vertex="1">
|
||||
<mxGeometry height="75" width="340" x="40" y="155" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="compare_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="⚖️ Standard vs Chain-of-Thought Prompting" vertex="1">
|
||||
<mxGeometry height="30" width="350" x="40" y="240" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="std_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFEBEE;strokeColor=#C62828;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="160" width="170" x="40" y="275" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="std_title" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=12;fontStyle=1;fontColor=#C62828;"
|
||||
value="Standard Prompting" vertex="1">
|
||||
<mxGeometry height="25" width="170" x="40" y="280" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="std_q" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=top;whiteSpace=wrap;rounded=0;fontSize=9;spacingLeft=5;spacingRight=5;"
|
||||
value="Q: Roger has 5 tennis balls. He buys 2 more cans. Each can has 3 balls. How many now?"
|
||||
vertex="1">
|
||||
<mxGeometry height="55" width="160" x="45" y="305" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="std_a" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=#FFCDD2;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=1;fontSize=10;fontStyle=1;spacingLeft=5;"
|
||||
value="A: The answer is 11." vertex="1">
|
||||
<mxGeometry height="25" width="150" x="50" y="365" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="std_result" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;fontStyle=1;fontColor=#C62828;"
|
||||
value="❌ Often Wrong" vertex="1">
|
||||
<mxGeometry height="30" width="170" x="40" y="400" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="cot_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E8F5E9;strokeColor=#2E7D32;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="160" width="170" x="220" y="275" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="cot_title" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=12;fontStyle=1;fontColor=#2E7D32;"
|
||||
value="Chain-of-Thought" vertex="1">
|
||||
<mxGeometry height="25" width="170" x="220" y="280" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="cot_q" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=top;whiteSpace=wrap;rounded=0;fontSize=9;spacingLeft=5;spacingRight=5;"
|
||||
value="Q: Roger has 5 tennis balls. He buys 2 more cans. Each can has 3 balls. How many now?"
|
||||
vertex="1">
|
||||
<mxGeometry height="55" width="160" x="225" y="305" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="cot_a" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=#C8E6C9;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=1;fontSize=9;fontStyle=1;spacingLeft=5;"
|
||||
value="A: 2 cans × 3 = 6 balls.<br>5 + 6 = 11. Answer: 11" vertex="1">
|
||||
<mxGeometry height="35" width="150" x="230" y="360" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="cot_result" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;fontStyle=1;fontColor=#2E7D32;"
|
||||
value="✓ Correct!" vertex="1">
|
||||
<mxGeometry height="30" width="170" x="220" y="400" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="vs_arrow" edge="1" parent="1"
|
||||
style="shape=flexArrow;endArrow=classic;startArrow=classic;html=1;fillColor=#FFC107;strokeColor=none;width=8;endSize=4;startSize=4;"
|
||||
value="">
|
||||
<mxGeometry relative="1" width="100" as="geometry">
|
||||
<mxPoint x="195" y="355" as="sourcePoint" />
|
||||
<mxPoint x="235" y="355" as="targetPoint" />
|
||||
</mxGeometry>
|
||||
</mxCell>
|
||||
<mxCell id="props_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="🔑 Key Properties" vertex="1">
|
||||
<mxGeometry height="30" width="150" x="400" y="125" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="prop1" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFF3E0;strokeColor=#EF6C00;fontSize=10;align=left;spacingLeft=8;"
|
||||
value="1️⃣ Decomposes multi-step problems" vertex="1">
|
||||
<mxGeometry height="32" width="180" x="400" y="155" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="prop2" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFF3E0;strokeColor=#EF6C00;fontSize=10;align=left;spacingLeft=8;"
|
||||
value="2️⃣ Interpretable reasoning window" vertex="1">
|
||||
<mxGeometry height="32" width="180" x="400" y="192" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="prop3" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFF3E0;strokeColor=#EF6C00;fontSize=10;align=left;spacingLeft=8;"
|
||||
value="3️⃣ Applicable to any language task" vertex="1">
|
||||
<mxGeometry height="32" width="180" x="400" y="229" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="prop4" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFF3E0;strokeColor=#EF6C00;fontSize=10;align=left;spacingLeft=8;"
|
||||
value="4️⃣ No finetuning required" vertex="1">
|
||||
<mxGeometry height="32" width="180" x="400" y="266" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="emergent_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="📈 Emergent Ability" vertex="1">
|
||||
<mxGeometry height="30" width="180" x="400" y="310" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="emergent_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#F3E5F5;strokeColor=#7B1FA2;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="95" width="180" x="400" y="340" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="emergent_text" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;"
|
||||
value="CoT only works with<br><b>~100B+ parameters</b><br><br>Small models produce<br>fluent but illogical chains"
|
||||
vertex="1">
|
||||
<mxGeometry height="85" width="180" x="400" y="345" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="results_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="📊 Key Results" vertex="1">
|
||||
<mxGeometry height="30" width="150" x="600" y="125" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E8F5E9;strokeColor=#2E7D32;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="100" width="160" x="600" y="155" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_title" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=12;fontStyle=1;fontColor=#2E7D32;"
|
||||
value="GSM8K (Math)" vertex="1">
|
||||
<mxGeometry height="20" width="160" x="600" y="160" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_bar1" parent="1"
|
||||
style="rounded=0;whiteSpace=wrap;html=1;fillColor=#FFCDD2;strokeColor=none;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="30" width="40" x="615" y="185" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_bar2" parent="1"
|
||||
style="rounded=0;whiteSpace=wrap;html=1;fillColor=#4CAF50;strokeColor=none;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="30" width="80" x="665" y="185" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_label1" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;fontStyle=1;"
|
||||
value="18%" vertex="1">
|
||||
<mxGeometry height="15" width="40" x="615" y="215" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_label2" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;fontStyle=1;fontColor=#2E7D32;"
|
||||
value="57%" vertex="1">
|
||||
<mxGeometry height="15" width="80" x="665" y="215" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="gsm_legend" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=9;fontColor=#666666;"
|
||||
value="Standard → CoT (PaLM 540B)" vertex="1">
|
||||
<mxGeometry height="20" width="160" x="600" y="232" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="bench_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="🧪 Benchmarks Tested" vertex="1">
|
||||
<mxGeometry height="30" width="180" x="600" y="265" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="bench_arith" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E3F2FD;strokeColor=#1565C0;fontSize=10;align=center;"
|
||||
value="🔢 Arithmetic<br><font style="font-size: 9px;">GSM8K, SVAMP, ASDiv, AQuA, MAWPS</font>"
|
||||
vertex="1">
|
||||
<mxGeometry height="45" width="160" x="600" y="295" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="bench_common" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E3F2FD;strokeColor=#1565C0;fontSize=10;align=center;"
|
||||
value="🧠 Commonsense<br><font style="font-size: 9px;">CSQA, StrategyQA, Date, Sports, SayCan</font>"
|
||||
vertex="1">
|
||||
<mxGeometry height="45" width="160" x="600" y="345" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="bench_symbol" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E3F2FD;strokeColor=#1565C0;fontSize=10;align=center;"
|
||||
value="🔣 Symbolic<br><font style="font-size: 9px;">Last Letter Concat, Coin Flip</font>"
|
||||
vertex="1">
|
||||
<mxGeometry height="40" width="160" x="600" y="395" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="🎯 Task Types & Results" vertex="1">
|
||||
<mxGeometry height="30" width="200" x="40" y="445" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_arith" parent="1"
|
||||
style="ellipse;whiteSpace=wrap;html=1;fillColor=#BBDEFB;strokeColor=#1565C0;fontSize=11;fontStyle=1;"
|
||||
value="Arithmetic<br>Reasoning" vertex="1">
|
||||
<mxGeometry height="60" width="90" x="40" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_arith_res" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=top;whiteSpace=wrap;rounded=0;fontSize=9;fontColor=#1565C0;"
|
||||
value="SOTA on GSM8K<br>(57% vs 55% prior)" vertex="1">
|
||||
<mxGeometry height="30" width="110" x="30" y="540" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_common" parent="1"
|
||||
style="ellipse;whiteSpace=wrap;html=1;fillColor=#C8E6C9;strokeColor=#2E7D32;fontSize=11;fontStyle=1;"
|
||||
value="Commonsense<br>Reasoning" vertex="1">
|
||||
<mxGeometry height="60" width="90" x="160" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_common_res" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=top;whiteSpace=wrap;rounded=0;fontSize=9;fontColor=#2E7D32;"
|
||||
value="SOTA StrategyQA<br>(75.6% vs 69.4%)" vertex="1">
|
||||
<mxGeometry height="30" width="110" x="150" y="540" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_symbol" parent="1"
|
||||
style="ellipse;whiteSpace=wrap;html=1;fillColor=#FFE0B2;strokeColor=#EF6C00;fontSize=11;fontStyle=1;"
|
||||
value="Symbolic<br>Reasoning" vertex="1">
|
||||
<mxGeometry height="60" width="90" x="280" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_symbol_res" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=top;whiteSpace=wrap;rounded=0;fontSize=9;fontColor=#EF6C00;"
|
||||
value="OOD Generalization<br>to longer sequences" vertex="1">
|
||||
<mxGeometry height="30" width="110" x="270" y="540" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="task_arrow1" edge="1" parent="1"
|
||||
style="endArrow=classic;html=1;strokeColor=#9E9E9E;strokeWidth=2;" value="">
|
||||
<mxGeometry height="50" relative="1" width="50" as="geometry">
|
||||
<mxPoint x="130" y="510" as="sourcePoint" />
|
||||
<mxPoint x="160" y="510" as="targetPoint" />
|
||||
</mxGeometry>
|
||||
</mxCell>
|
||||
<mxCell id="task_arrow2" edge="1" parent="1"
|
||||
style="endArrow=classic;html=1;strokeColor=#9E9E9E;strokeWidth=2;" value="">
|
||||
<mxGeometry height="50" relative="1" width="50" as="geometry">
|
||||
<mxPoint x="250" y="510" as="sourcePoint" />
|
||||
<mxPoint x="280" y="510" as="targetPoint" />
|
||||
</mxGeometry>
|
||||
</mxCell>
|
||||
<mxCell id="models_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="🤖 Models Tested" vertex="1">
|
||||
<mxGeometry height="30" width="150" x="400" y="445" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="models_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#ECEFF1;strokeColor=#607D8B;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="95" width="180" x="400" y="475" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model1" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;spacingLeft=10;"
|
||||
value="• GPT-3 (175B)" vertex="1">
|
||||
<mxGeometry height="20" width="90" x="400" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model2" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;spacingLeft=10;"
|
||||
value="• LaMDA (137B)" vertex="1">
|
||||
<mxGeometry height="20" width="90" x="400" y="500" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model3" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;spacingLeft=10;"
|
||||
value="• PaLM (540B)" vertex="1">
|
||||
<mxGeometry height="20" width="90" x="400" y="520" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model4" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;spacingLeft=10;"
|
||||
value="• Codex" vertex="1">
|
||||
<mxGeometry height="20" width="80" x="490" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model5" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=11;spacingLeft=10;"
|
||||
value="• UL2 (20B)" vertex="1">
|
||||
<mxGeometry height="20" width="80" x="490" y="500" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="model_note" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;fontStyle=2;fontColor=#607D8B;"
|
||||
value="No finetuning - prompting only!" vertex="1">
|
||||
<mxGeometry height="20" width="180" x="400" y="545" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="takeaway_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;fontStyle=1;fontColor=#1a237e;"
|
||||
value="✨ Key Takeaways" vertex="1">
|
||||
<mxGeometry height="30" width="160" x="600" y="445" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="takeaway_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFF8E1;strokeColor=#FFA000;arcSize=8;"
|
||||
value="" vertex="1">
|
||||
<mxGeometry height="95" width="160" x="600" y="475" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="take1" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;spacingLeft=5;"
|
||||
value="✓ Simple yet powerful" vertex="1">
|
||||
<mxGeometry height="18" width="150" x="605" y="480" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="take2" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;spacingLeft=5;"
|
||||
value="✓ Emergent at scale" vertex="1">
|
||||
<mxGeometry height="18" width="150" x="605" y="498" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="take3" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;spacingLeft=5;"
|
||||
value="✓ Broadly applicable" vertex="1">
|
||||
<mxGeometry height="18" width="150" x="605" y="516" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="take4" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;spacingLeft=5;"
|
||||
value="✓ No training needed" vertex="1">
|
||||
<mxGeometry height="18" width="150" x="605" y="534" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="take5" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;spacingLeft=5;"
|
||||
value="✓ State-of-the-art results" vertex="1">
|
||||
<mxGeometry height="18" width="150" x="605" y="552" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="format_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;fontStyle=1;fontColor=#1a237e;"
|
||||
value="📝 Prompt Format" vertex="1">
|
||||
<mxGeometry height="25" width="150" x="40" y="575" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="format_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E1BEE7;strokeColor=#7B1FA2;fontSize=12;fontStyle=1;"
|
||||
value="〈 Input, Chain of Thought, Output 〉" vertex="1">
|
||||
<mxGeometry height="35" width="250" x="40" y="600" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="limit_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;fontStyle=1;fontColor=#1a237e;"
|
||||
value="⚠️ Limitations" vertex="1">
|
||||
<mxGeometry height="25" width="120" x="310" y="575" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="limit_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFEBEE;strokeColor=#C62828;fontSize=10;align=left;spacingLeft=8;"
|
||||
value="• Requires large models (~100B+)<br>• No guarantee of correct reasoning<br>• Costly to serve in production"
|
||||
vertex="1">
|
||||
<mxGeometry height="55" width="200" x="310" y="600" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="impact_header" parent="1"
|
||||
style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;fontStyle=1;fontColor=#1a237e;"
|
||||
value="🚀 Impact" vertex="1">
|
||||
<mxGeometry height="25" width="100" x="530" y="575" as="geometry" />
|
||||
</mxCell>
|
||||
<mxCell id="impact_box" parent="1"
|
||||
style="rounded=1;whiteSpace=wrap;html=1;fillColor=#E8F5E9;strokeColor=#2E7D32;fontSize=10;align=left;spacingLeft=8;spacingRight=8;"
|
||||
value="Foundational technique for modern LLM reasoning - inspired many follow-up works including Self-Consistency, Tree-of-Thought, etc."
|
||||
vertex="1">
|
||||
<mxGeometry height="55" width="230" x="530" y="600" as="geometry" />
|
||||
</mxCell>
|
||||
</root>`,
|
||||
},
|
||||
{
|
||||
promptText: "Draw a cat for me",
|
||||
hasImage: false,
|
||||
|
||||
@@ -1,75 +0,0 @@
|
||||
import { extractText, getDocumentProxy } from "unpdf"
|
||||
|
||||
// Maximum characters allowed for extracted text (configurable via env)
|
||||
const DEFAULT_MAX_EXTRACTED_CHARS = 150000 // 150k chars
|
||||
export const MAX_EXTRACTED_CHARS =
|
||||
Number(process.env.NEXT_PUBLIC_MAX_EXTRACTED_CHARS) ||
|
||||
DEFAULT_MAX_EXTRACTED_CHARS
|
||||
|
||||
// Text file extensions we support
|
||||
const TEXT_EXTENSIONS = [
|
||||
".txt",
|
||||
".md",
|
||||
".markdown",
|
||||
".json",
|
||||
".csv",
|
||||
".xml",
|
||||
".html",
|
||||
".css",
|
||||
".js",
|
||||
".ts",
|
||||
".jsx",
|
||||
".tsx",
|
||||
".py",
|
||||
".java",
|
||||
".c",
|
||||
".cpp",
|
||||
".h",
|
||||
".go",
|
||||
".rs",
|
||||
".yaml",
|
||||
".yml",
|
||||
".toml",
|
||||
".ini",
|
||||
".log",
|
||||
".sh",
|
||||
".bash",
|
||||
".zsh",
|
||||
]
|
||||
|
||||
/**
|
||||
* Extract text content from a PDF file
|
||||
* Uses unpdf library for client-side extraction
|
||||
*/
|
||||
export async function extractPdfText(file: File): Promise<string> {
|
||||
const buffer = await file.arrayBuffer()
|
||||
const pdf = await getDocumentProxy(new Uint8Array(buffer))
|
||||
const { text } = await extractText(pdf, { mergePages: true })
|
||||
return text as string
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a file is a PDF
|
||||
*/
|
||||
export function isPdfFile(file: File): boolean {
|
||||
return file.type === "application/pdf" || file.name.endsWith(".pdf")
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a file is a text file
|
||||
*/
|
||||
export function isTextFile(file: File): boolean {
|
||||
const name = file.name.toLowerCase()
|
||||
return (
|
||||
file.type.startsWith("text/") ||
|
||||
file.type === "application/json" ||
|
||||
TEXT_EXTENSIONS.some((ext) => name.endsWith(ext))
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract text content from a text file
|
||||
*/
|
||||
export async function extractTextFileContent(file: File): Promise<string> {
|
||||
return await file.text()
|
||||
}
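A short sketch of how the helpers above compose: detect the file kind, extract its text, then enforce `MAX_EXTRACTED_CHARS` before the text is attached to a prompt. The wrapper function name and error messages are invented for illustration; the `@/lib/pdf-utils` import path matches the one used by the file-processor hook later in this diff:

```ts
// Sketch: turning an uploaded File into prompt text with the helpers above.
import {
    extractPdfText,
    extractTextFileContent,
    isPdfFile,
    isTextFile,
    MAX_EXTRACTED_CHARS,
} from "@/lib/pdf-utils"

export async function fileToPromptText(file: File): Promise<string> {
    let text: string
    if (isPdfFile(file)) {
        text = await extractPdfText(file)
    } else if (isTextFile(file)) {
        text = await extractTextFileContent(file)
    } else {
        throw new Error(`Unsupported file type: ${file.name}`)
    }

    if (text.length > MAX_EXTRACTED_CHARS) {
        throw new Error(
            `${file.name} exceeds the ${MAX_EXTRACTED_CHARS} character limit`,
        )
    }
    return text
}
```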
@@ -1,27 +0,0 @@
// Centralized localStorage keys
// Consolidates all storage keys from chat-panel.tsx and settings-dialog.tsx

export const STORAGE_KEYS = {
    // Chat data
    messages: "next-ai-draw-io-messages",
    xmlSnapshots: "next-ai-draw-io-xml-snapshots",
    diagramXml: "next-ai-draw-io-diagram-xml",
    sessionId: "next-ai-draw-io-session-id",

    // Quota tracking
    requestCount: "next-ai-draw-io-request-count",
    requestDate: "next-ai-draw-io-request-date",
    tokenCount: "next-ai-draw-io-token-count",
    tokenDate: "next-ai-draw-io-token-date",
    tpmCount: "next-ai-draw-io-tpm-count",
    tpmMinute: "next-ai-draw-io-tpm-minute",

    // Settings
    accessCode: "next-ai-draw-io-access-code",
    closeProtection: "next-ai-draw-io-close-protection",
    accessCodeRequired: "next-ai-draw-io-access-code-required",
    aiProvider: "next-ai-draw-io-ai-provider",
    aiBaseUrl: "next-ai-draw-io-ai-base-url",
    aiApiKey: "next-ai-draw-io-ai-api-key",
    aiModel: "next-ai-draw-io-ai-model",
} as const
@@ -10,10 +10,10 @@
export const DEFAULT_SYSTEM_PROMPT = `
You are an expert diagram creation assistant specializing in draw.io XML generation.
Your primary function is chat with user and crafting clear, well-organized visual diagrams through precise XML specifications.
You can see images that users upload, and you can read the text content extracted from PDF documents they upload.
You can see the image that user uploaded.

When you are asked to create a diagram, briefly describe your plan about the layout and structure to avoid object overlapping or edge cross the objects. (2-3 sentences max), then use display_diagram tool to generate the XML.
After generating or editing a diagram, you don't need to say anything. The user can see the diagram - no need to describe it.
When you are asked to create a diagram, you must first tell user you plan in text first. Plan the layout and structure that can avoid object overlapping or edge cross the objects.
Then use display_diagram tool to generate the full draw.io XML for the entire diagram.

## App Context
You are an AI agent (powered by {{MODEL_NAME}}) inside a web app. The interface has:
@@ -25,7 +25,7 @@ You can read and modify diagrams by generating draw.io XML code through tool calls.
## App Features
1. **Diagram History** (clock icon, bottom-left of chat input): The app automatically saves a snapshot before each AI edit. Users can view the history panel and restore any previous version. Feel free to make changes - nothing is permanently lost.
2. **Theme Toggle** (palette icon, bottom-left of chat input): Users can switch between minimal UI and sketch-style UI for the draw.io editor.
3. **Image/PDF Upload** (paperclip icon, bottom-left of chat input): Users can upload images or PDF documents for you to analyze and generate diagrams from.
3. **Image Upload** (paperclip icon, bottom-left of chat input): Users can upload images for you to analyze and replicate as diagrams.
4. **Export** (via draw.io toolbar): Users can save diagrams as .drawio, .svg, or .png files.
5. **Clear Chat** (trash icon, bottom-right of chat input): Clears the conversation and resets the diagram.

@@ -1,22 +1,21 @@
/**
 * Token counting utilities using js-tiktoken
 * Token counting utilities using Anthropic's tokenizer
 *
 * Uses cl100k_base encoding (GPT-4) which is close to Claude's tokenization.
 * This is a pure JavaScript implementation, no WASM required.
 * This file is separate from system-prompts.ts because the @anthropic-ai/tokenizer
 * package uses WebAssembly which doesn't work well with Next.js server-side rendering.
 * Import this file only in scripts or client-side code, not in API routes.
 */

import { encodingForModel } from "js-tiktoken"
import { countTokens } from "@anthropic-ai/tokenizer"
import { DEFAULT_SYSTEM_PROMPT, EXTENDED_SYSTEM_PROMPT } from "./system-prompts"

const encoder = encodingForModel("gpt-4o")

/**
 * Count the number of tokens in a text string
 * Count the number of tokens in a text string using Anthropic's tokenizer
 * @param text - The text to count tokens for
 * @returns The number of tokens
 */
export function countTextTokens(text: string): number {
    return encoder.encode(text).length
    return countTokens(text)
}

/**
@@ -29,8 +28,8 @@ export function getSystemPromptTokenCounts(): {
    extended: number
    additions: number
} {
    const defaultTokens = countTextTokens(DEFAULT_SYSTEM_PROMPT)
    const extendedTokens = countTextTokens(EXTENDED_SYSTEM_PROMPT)
    const defaultTokens = countTokens(DEFAULT_SYSTEM_PROMPT)
    const extendedTokens = countTokens(EXTENDED_SYSTEM_PROMPT)
    return {
        default: defaultTokens,
        extended: extendedTokens,
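A quick usage sketch for the token-counting utilities above, e.g. in a one-off script that checks how large a prompt change is before committing it. The `@/lib/token-counter` import path is an assumed filename; `countTextTokens` and `getSystemPromptTokenCounts` are the exports shown in the hunk:

```ts
// Sketch: comparing system prompt sizes with the utilities above.
import {
    countTextTokens,
    getSystemPromptTokenCounts,
} from "@/lib/token-counter"

const counts = getSystemPromptTokenCounts()
console.log(`default prompt:  ${counts.default} tokens`)
console.log(`extended prompt: ${counts.extended} tokens`)
console.log(`additions:       ${counts.additions} tokens`)

// Estimate the cost of a new prompt fragment before adding it
const draft = "When drawing AWS diagrams, prefer official AWS shapes."
console.log(`draft addition: ${countTextTokens(draft)} tokens`)
```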
@@ -1,110 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import { useState } from "react"
|
||||
import { toast } from "sonner"
|
||||
import {
|
||||
extractPdfText,
|
||||
extractTextFileContent,
|
||||
isPdfFile,
|
||||
isTextFile,
|
||||
MAX_EXTRACTED_CHARS,
|
||||
} from "@/lib/pdf-utils"
|
||||
|
||||
export interface FileData {
|
||||
text: string
|
||||
charCount: number
|
||||
isExtracting: boolean
|
||||
}
|
||||
|
||||
/**
|
||||
* Hook for processing file uploads, especially PDFs and text files.
|
||||
* Handles text extraction, character limit validation, and cleanup.
|
||||
*/
|
||||
export function useFileProcessor() {
|
||||
const [files, setFiles] = useState<File[]>([])
|
||||
const [pdfData, setPdfData] = useState<Map<File, FileData>>(new Map())
|
||||
|
||||
const handleFileChange = async (newFiles: File[]) => {
|
||||
setFiles(newFiles)
|
||||
|
||||
// Extract text immediately for new PDF/text files
|
||||
for (const file of newFiles) {
|
||||
const needsExtraction =
|
||||
(isPdfFile(file) || isTextFile(file)) && !pdfData.has(file)
|
||||
if (needsExtraction) {
|
||||
// Mark as extracting
|
||||
setPdfData((prev) => {
|
||||
const next = new Map(prev)
|
||||
next.set(file, {
|
||||
text: "",
|
||||
charCount: 0,
|
||||
isExtracting: true,
|
||||
})
|
||||
return next
|
||||
})
|
||||
|
||||
// Extract text asynchronously
|
||||
try {
|
||||
let text: string
|
||||
if (isPdfFile(file)) {
|
||||
text = await extractPdfText(file)
|
||||
} else {
|
||||
text = await extractTextFileContent(file)
|
||||
}
|
||||
|
||||
// Check character limit
|
||||
if (text.length > MAX_EXTRACTED_CHARS) {
|
||||
const limitK = MAX_EXTRACTED_CHARS / 1000
|
||||
toast.error(
|
||||
`${file.name}: Content exceeds ${limitK}k character limit (${(text.length / 1000).toFixed(1)}k chars)`,
|
||||
)
|
||||
setPdfData((prev) => {
|
||||
const next = new Map(prev)
|
||||
next.delete(file)
|
||||
return next
|
||||
})
|
||||
// Remove the file from the list
|
||||
setFiles((prev) => prev.filter((f) => f !== file))
|
||||
continue
|
||||
}
|
||||
|
||||
setPdfData((prev) => {
|
||||
const next = new Map(prev)
|
||||
next.set(file, {
|
||||
text,
|
||||
charCount: text.length,
|
||||
isExtracting: false,
|
||||
})
|
||||
return next
|
||||
})
|
||||
} catch (error) {
|
||||
console.error("Failed to extract text:", error)
|
||||
toast.error(`Failed to read file: ${file.name}`)
|
||||
setPdfData((prev) => {
|
||||
const next = new Map(prev)
|
||||
next.delete(file)
|
||||
return next
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up pdfData for removed files
|
||||
setPdfData((prev) => {
|
||||
const next = new Map(prev)
|
||||
for (const key of prev.keys()) {
|
||||
if (!newFiles.includes(key)) {
|
||||
next.delete(key)
|
||||
}
|
||||
}
|
||||
return next
|
||||
})
|
||||
}
|
||||
|
||||
return {
|
||||
files,
|
||||
pdfData,
|
||||
handleFileChange,
|
||||
setFiles, // Export for external control (e.g., clearing files)
|
||||
}
|
||||
}
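A hedged sketch of a component consuming the `useFileProcessor` hook above, based only on its return shape (`files`, `pdfData`, `handleFileChange`, `setFiles`); the component markup and the `@/hooks/use-file-processor` import path are illustrative assumptions:

```tsx
// Sketch only: a consumer that lists uploads and their extraction status.
"use client"

import { useFileProcessor } from "@/hooks/use-file-processor"

export function AttachmentList() {
    const { files, pdfData, handleFileChange, setFiles } = useFileProcessor()

    return (
        <div>
            <input
                type="file"
                multiple
                onChange={(e) =>
                    handleFileChange(Array.from(e.target.files ?? []))
                }
            />
            <ul>
                {files.map((file) => {
                    const data = pdfData.get(file)
                    return (
                        <li key={file.name}>
                            {file.name}
                            {data?.isExtracting
                                ? " (extracting…)"
                                : data
                                  ? ` (${data.charCount} chars)`
                                  : ""}
                        </li>
                    )
                })}
            </ul>
            <button onClick={() => setFiles([])}>Clear</button>
        </div>
    )
}
```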
@@ -1,247 +0,0 @@
|
||||
"use client"
|
||||
|
||||
import { useCallback, useMemo } from "react"
|
||||
import { toast } from "sonner"
|
||||
import { QuotaLimitToast } from "@/components/quota-limit-toast"
|
||||
import { STORAGE_KEYS } from "@/lib/storage"
|
||||
|
||||
export interface QuotaConfig {
|
||||
dailyRequestLimit: number
|
||||
dailyTokenLimit: number
|
||||
tpmLimit: number
|
||||
}
|
||||
|
||||
export interface QuotaCheckResult {
|
||||
allowed: boolean
|
||||
remaining: number
|
||||
used: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Hook for managing request/token quotas and rate limiting.
|
||||
* Handles three types of limits:
|
||||
* - Daily request limit
|
||||
* - Daily token limit
|
||||
* - Tokens per minute (TPM) rate limit
|
||||
*
|
||||
* Users with their own API key bypass all limits.
|
||||
*/
|
||||
export function useQuotaManager(config: QuotaConfig): {
|
||||
hasOwnApiKey: () => boolean
|
||||
checkDailyLimit: () => QuotaCheckResult
|
||||
checkTokenLimit: () => QuotaCheckResult
|
||||
checkTPMLimit: () => QuotaCheckResult
|
||||
incrementRequestCount: () => void
|
||||
incrementTokenCount: (tokens: number) => void
|
||||
incrementTPMCount: (tokens: number) => void
|
||||
showQuotaLimitToast: () => void
|
||||
showTokenLimitToast: (used: number) => void
|
||||
showTPMLimitToast: () => void
|
||||
} {
|
||||
const { dailyRequestLimit, dailyTokenLimit, tpmLimit } = config
|
||||
|
||||
// Check if user has their own API key configured (bypass limits)
|
||||
const hasOwnApiKey = useCallback((): boolean => {
|
||||
const provider = localStorage.getItem(STORAGE_KEYS.aiProvider)
|
||||
const apiKey = localStorage.getItem(STORAGE_KEYS.aiApiKey)
|
||||
return !!(provider && apiKey)
|
||||
}, [])
|
||||
|
||||
// Generic helper: Parse count from localStorage with NaN guard
|
||||
const parseStorageCount = (key: string): number => {
|
||||
const count = parseInt(localStorage.getItem(key) || "0", 10)
|
||||
return Number.isNaN(count) ? 0 : count
|
||||
}
|
||||
|
||||
// Generic helper: Create quota checker factory
|
||||
const createQuotaChecker = useCallback(
|
||||
(
|
||||
getTimeKey: () => string,
|
||||
timeStorageKey: string,
|
||||
countStorageKey: string,
|
||||
limit: number,
|
||||
) => {
|
||||
return (): QuotaCheckResult => {
|
||||
if (hasOwnApiKey())
|
||||
return { allowed: true, remaining: -1, used: 0 }
|
||||
if (limit <= 0) return { allowed: true, remaining: -1, used: 0 }
|
||||
|
||||
const currentTime = getTimeKey()
|
||||
const storedTime = localStorage.getItem(timeStorageKey)
|
||||
let count = parseStorageCount(countStorageKey)
|
||||
|
||||
if (storedTime !== currentTime) {
|
||||
count = 0
|
||||
localStorage.setItem(timeStorageKey, currentTime)
|
||||
localStorage.setItem(countStorageKey, "0")
|
||||
}
|
||||
|
||||
return {
|
||||
allowed: count < limit,
|
||||
remaining: limit - count,
|
||||
used: count,
|
||||
}
|
||||
}
|
||||
},
|
||||
[hasOwnApiKey],
|
||||
)
|
||||
|
||||
// Generic helper: Create quota incrementer factory
|
||||
const createQuotaIncrementer = useCallback(
|
||||
(
|
||||
getTimeKey: () => string,
|
||||
timeStorageKey: string,
|
||||
countStorageKey: string,
|
||||
validateInput: boolean = false,
|
||||
) => {
|
||||
return (tokens: number = 1): void => {
|
||||
if (validateInput && (!Number.isFinite(tokens) || tokens <= 0))
|
||||
return
|
||||
|
||||
const currentTime = getTimeKey()
|
||||
const storedTime = localStorage.getItem(timeStorageKey)
|
||||
let count = parseStorageCount(countStorageKey)
|
||||
|
||||
if (storedTime !== currentTime) {
|
||||
count = 0
|
||||
localStorage.setItem(timeStorageKey, currentTime)
|
||||
}
|
||||
|
||||
localStorage.setItem(countStorageKey, String(count + tokens))
|
||||
}
|
||||
},
|
||||
[],
|
||||
)
|
||||
|
||||
// Check daily request limit
|
||||
const checkDailyLimit = useMemo(
|
||||
() =>
|
||||
createQuotaChecker(
|
||||
() => new Date().toDateString(),
|
||||
STORAGE_KEYS.requestDate,
|
||||
STORAGE_KEYS.requestCount,
|
||||
dailyRequestLimit,
|
||||
),
|
||||
[createQuotaChecker, dailyRequestLimit],
|
||||
)
|
||||
|
||||
// Increment request count
|
||||
const incrementRequestCount = useMemo(
|
||||
() =>
|
||||
createQuotaIncrementer(
|
||||
() => new Date().toDateString(),
|
||||
STORAGE_KEYS.requestDate,
|
||||
STORAGE_KEYS.requestCount,
|
||||
false,
|
||||
),
|
||||
[createQuotaIncrementer],
|
||||
)
|
||||
|
||||
// Show quota limit toast (request-based)
|
||||
const showQuotaLimitToast = useCallback(() => {
|
||||
toast.custom(
|
||||
(t) => (
|
||||
<QuotaLimitToast
|
||||
used={dailyRequestLimit}
|
||||
limit={dailyRequestLimit}
|
||||
onDismiss={() => toast.dismiss(t)}
|
||||
/>
|
||||
),
|
||||
{ duration: 15000 },
|
||||
)
|
||||
}, [dailyRequestLimit])
|
||||
|
||||
// Check daily token limit
|
||||
const checkTokenLimit = useMemo(
|
||||
() =>
|
||||
createQuotaChecker(
|
||||
() => new Date().toDateString(),
|
||||
STORAGE_KEYS.tokenDate,
|
||||
STORAGE_KEYS.tokenCount,
|
||||
dailyTokenLimit,
|
||||
),
|
||||
[createQuotaChecker, dailyTokenLimit],
|
||||
)
|
||||
|
||||
// Increment token count
|
||||
const incrementTokenCount = useMemo(
|
||||
() =>
|
||||
createQuotaIncrementer(
|
||||
() => new Date().toDateString(),
|
||||
STORAGE_KEYS.tokenDate,
|
||||
STORAGE_KEYS.tokenCount,
|
||||
true, // Validate input tokens
|
||||
),
|
||||
[createQuotaIncrementer],
|
||||
)
|
||||
|
||||
// Show token limit toast
|
||||
const showTokenLimitToast = useCallback(
|
||||
(used: number) => {
|
||||
toast.custom(
|
||||
(t) => (
|
||||
<QuotaLimitToast
|
||||
type="token"
|
||||
used={used}
|
||||
limit={dailyTokenLimit}
|
||||
onDismiss={() => toast.dismiss(t)}
|
||||
/>
|
||||
),
|
||||
{ duration: 15000 },
|
||||
)
|
||||
},
|
||||
[dailyTokenLimit],
|
||||
)
|
||||
|
||||
// Check TPM (tokens per minute) limit
|
||||
const checkTPMLimit = useMemo(
|
||||
() =>
|
||||
createQuotaChecker(
|
||||
() => Math.floor(Date.now() / 60000).toString(),
|
||||
STORAGE_KEYS.tpmMinute,
|
||||
STORAGE_KEYS.tpmCount,
|
||||
tpmLimit,
|
||||
),
|
||||
[createQuotaChecker, tpmLimit],
|
||||
)
|
||||
|
||||
// Increment TPM count
|
||||
const incrementTPMCount = useMemo(
|
||||
() =>
|
||||
createQuotaIncrementer(
|
||||
() => Math.floor(Date.now() / 60000).toString(),
|
||||
STORAGE_KEYS.tpmMinute,
|
||||
STORAGE_KEYS.tpmCount,
|
||||
true, // Validate input tokens
|
||||
),
|
||||
[createQuotaIncrementer],
|
||||
)
|
||||
|
||||
// Show TPM limit toast
|
||||
const showTPMLimitToast = useCallback(() => {
|
||||
const limitDisplay =
|
||||
tpmLimit >= 1000 ? `${tpmLimit / 1000}k` : String(tpmLimit)
|
||||
toast.error(
|
||||
`Rate limit reached (${limitDisplay} tokens/min). Please wait 60 seconds before sending another request.`,
|
||||
{ duration: 8000 },
|
||||
)
|
||||
}, [tpmLimit])
|
||||
|
||||
return {
|
||||
// Check functions
|
||||
hasOwnApiKey,
|
||||
checkDailyLimit,
|
||||
checkTokenLimit,
|
||||
checkTPMLimit,
|
||||
|
||||
// Increment functions
|
||||
incrementRequestCount,
|
||||
incrementTokenCount,
|
||||
incrementTPMCount,
|
||||
|
||||
// Toast functions
|
||||
showQuotaLimitToast,
|
||||
showTokenLimitToast,
|
||||
showTPMLimitToast,
|
||||
}
|
||||
}
|
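As a quick orientation, here is a minimal sketch of how a chat component might wrap its send handler with this hook. The import path, the concrete limit values, and the `sendMessage`/`usage` shapes are assumptions for illustration, not part of the hook itself.

```ts
// Hypothetical consumer of useQuotaManager; limits and import path are assumed.
import { useQuotaManager } from "@/hooks/use-quota-manager"

export function useGuardedSend(
    sendMessage: (text: string) => Promise<{ totalTokens: number }>,
) {
    const quota = useQuotaManager({
        dailyRequestLimit: 20, // illustrative demo limits
        dailyTokenLimit: 100_000,
        tpmLimit: 10_000,
    })

    return async (text: string) => {
        // Users with their own API key pass every check automatically.
        if (!quota.checkDailyLimit().allowed) return quota.showQuotaLimitToast()
        const tokenCheck = quota.checkTokenLimit()
        if (!tokenCheck.allowed) return quota.showTokenLimitToast(tokenCheck.used)
        if (!quota.checkTPMLimit().allowed) return quota.showTPMLimitToast()

        const usage = await sendMessage(text)

        // Record consumption once the request has gone through.
        quota.incrementRequestCount()
        quota.incrementTokenCount(usage.totalTokens)
        quota.incrementTPMCount(usage.totalTokens)
    }
}
```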
||||
933
lib/utils.ts
@@ -6,95 +6,6 @@ export function cn(...inputs: ClassValue[]) {
|
||||
return twMerge(clsx(inputs))
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// XML Validation/Fix Constants
|
||||
// ============================================================================
|
||||
|
||||
/** Maximum XML size to process (1MB) - larger XMLs may cause performance issues */
|
||||
const MAX_XML_SIZE = 1_000_000
|
||||
|
||||
/** Maximum iterations for aggressive cell dropping to prevent infinite loops */
|
||||
const MAX_DROP_ITERATIONS = 10
|
||||
|
||||
/** Structural attributes that should not be duplicated in draw.io */
|
||||
const STRUCTURAL_ATTRS = [
|
||||
"edge",
|
||||
"parent",
|
||||
"source",
|
||||
"target",
|
||||
"vertex",
|
||||
"connectable",
|
||||
]
|
||||
|
||||
/** Valid XML entity names */
|
||||
const VALID_ENTITIES = new Set(["lt", "gt", "amp", "quot", "apos"])
|
||||
|
||||
// ============================================================================
|
||||
// XML Parsing Helpers
|
||||
// ============================================================================
|
||||
|
||||
interface ParsedTag {
|
||||
tag: string
|
||||
tagName: string
|
||||
isClosing: boolean
|
||||
isSelfClosing: boolean
|
||||
startIndex: number
|
||||
endIndex: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse XML tags while properly handling quoted strings
|
||||
* This is a shared utility used by both validation and fixing logic
|
||||
*/
|
||||
function parseXmlTags(xml: string): ParsedTag[] {
|
||||
const tags: ParsedTag[] = []
|
||||
let i = 0
|
||||
|
||||
while (i < xml.length) {
|
||||
const tagStart = xml.indexOf("<", i)
|
||||
if (tagStart === -1) break
|
||||
|
||||
// Find matching > by tracking quotes
|
||||
let tagEnd = tagStart + 1
|
||||
let inQuote = false
|
||||
let quoteChar = ""
|
||||
|
||||
while (tagEnd < xml.length) {
|
||||
const c = xml[tagEnd]
|
||||
if (inQuote) {
|
||||
if (c === quoteChar) inQuote = false
|
||||
} else {
|
||||
if (c === '"' || c === "'") {
|
||||
inQuote = true
|
||||
quoteChar = c
|
||||
} else if (c === ">") {
|
||||
break
|
||||
}
|
||||
}
|
||||
tagEnd++
|
||||
}
|
||||
|
||||
if (tagEnd >= xml.length) break
|
||||
|
||||
const tag = xml.substring(tagStart, tagEnd + 1)
|
||||
i = tagEnd + 1
|
||||
|
||||
const tagMatch = /^<(\/?)([a-zA-Z][a-zA-Z0-9:_-]*)/.exec(tag)
|
||||
if (!tagMatch) continue
|
||||
|
||||
tags.push({
|
||||
tag,
|
||||
tagName: tagMatch[2],
|
||||
isClosing: tagMatch[1] === "/",
|
||||
isSelfClosing: tag.endsWith("/>"),
|
||||
startIndex: tagStart,
|
||||
endIndex: tagEnd,
|
||||
})
|
||||
}
|
||||
|
||||
return tags
|
||||
}
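A tiny illustrative call (input invented) showing why the quote tracking above matters: a `>` inside a quoted attribute value does not terminate the tag. `parseXmlTags` is module-internal, so a snippet like this would live in the same file or a test.

```ts
// Illustrative only: three tags are reported, in document order.
const tags = parseXmlTags(
    `<mxCell id="a" value="x > y"><mxGeometry as="geometry"/></mxCell>`,
)
// tags[0] -> { tagName: "mxCell", isClosing: false, isSelfClosing: false, ... }
// tags[1] -> { tagName: "mxGeometry", isClosing: false, isSelfClosing: true, ... }
// tags[2] -> { tagName: "mxCell", isClosing: true, isSelfClosing: false, ... }
```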
|
||||
|
||||
/**
|
||||
* Format XML string with proper indentation and line breaks
|
||||
* @param xml - The XML string to format
|
||||
@@ -195,32 +106,6 @@ export function convertToLegalXml(xmlString: string): string {
|
||||
return result
|
||||
}
|
||||
|
||||
/**
|
||||
* Wrap XML content with the full mxfile structure required by draw.io.
|
||||
* Handles cases where XML is just <root>, <mxGraphModel>, or already has <mxfile>.
|
||||
* @param xml - The XML string (may be partial or complete)
|
||||
* @returns Full mxfile-wrapped XML string
|
||||
*/
|
||||
export function wrapWithMxFile(xml: string): string {
|
||||
if (!xml) {
|
||||
return `<mxfile><diagram name="Page-1" id="page-1"><mxGraphModel><root><mxCell id="0"/><mxCell id="1" parent="0"/></root></mxGraphModel></diagram></mxfile>`
|
||||
}
|
||||
|
||||
// Already has full structure
|
||||
if (xml.includes("<mxfile")) {
|
||||
return xml
|
||||
}
|
||||
|
||||
// Has mxGraphModel but not mxfile
|
||||
if (xml.includes("<mxGraphModel")) {
|
||||
return `<mxfile><diagram name="Page-1" id="page-1">${xml}</diagram></mxfile>`
|
||||
}
|
||||
|
||||
// Just <root> content - extract inner content and wrap fully
|
||||
const rootContent = xml.replace(/<\/?root>/g, "").trim()
|
||||
return `<mxfile><diagram name="Page-1" id="page-1"><mxGraphModel><root>${rootContent}</root></mxGraphModel></diagram></mxfile>`
|
||||
}
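A small usage sketch of the branch above that receives a bare `<mxGraphModel>` (the fragment is made up):

```ts
import { wrapWithMxFile } from "@/lib/utils"

// Illustrative fragment without the <mxfile>/<diagram> wrapper.
const partial = `<mxGraphModel><root><mxCell id="0"/><mxCell id="1" parent="0"/></root></mxGraphModel>`

const full = wrapWithMxFile(partial)
// => "<mxfile><diagram name=\"Page-1\" id=\"page-1\"><mxGraphModel>...</mxGraphModel></diagram></mxfile>"
```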
|
||||
|
||||
/**
|
||||
* Replace nodes in a Draw.io XML diagram
|
||||
* @param currentXML - The original Draw.io XML string
|
||||
@@ -622,733 +507,143 @@ export function replaceXMLParts(
|
||||
return result
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Validation Helper Functions
|
||||
// ============================================================================
|
||||
|
||||
/** Check for duplicate structural attributes in a tag */
|
||||
function checkDuplicateAttributes(xml: string): string | null {
|
||||
const structuralSet = new Set(STRUCTURAL_ATTRS)
|
||||
const tagPattern = /<[^>]+>/g
|
||||
let tagMatch
|
||||
while ((tagMatch = tagPattern.exec(xml)) !== null) {
|
||||
const tag = tagMatch[0]
|
||||
const attrPattern = /\s([a-zA-Z_:][a-zA-Z0-9_:.-]*)\s*=/g
|
||||
const attributes = new Map<string, number>()
|
||||
let attrMatch
|
||||
while ((attrMatch = attrPattern.exec(tag)) !== null) {
|
||||
const attrName = attrMatch[1]
|
||||
attributes.set(attrName, (attributes.get(attrName) || 0) + 1)
|
||||
}
|
||||
const duplicates = Array.from(attributes.entries())
|
||||
.filter(([name, count]) => count > 1 && structuralSet.has(name))
|
||||
.map(([name]) => name)
|
||||
if (duplicates.length > 0) {
|
||||
return `Invalid XML: Duplicate structural attribute(s): ${duplicates.join(", ")}. Remove duplicate attributes.`
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/** Check for duplicate IDs in XML */
|
||||
function checkDuplicateIds(xml: string): string | null {
|
||||
const idPattern = /\bid\s*=\s*["']([^"']+)["']/gi
|
||||
const ids = new Map<string, number>()
|
||||
let idMatch
|
||||
while ((idMatch = idPattern.exec(xml)) !== null) {
|
||||
const id = idMatch[1]
|
||||
ids.set(id, (ids.get(id) || 0) + 1)
|
||||
}
|
||||
const duplicateIds = Array.from(ids.entries())
|
||||
.filter(([, count]) => count > 1)
|
||||
.map(([id, count]) => `'${id}' (${count}x)`)
|
||||
if (duplicateIds.length > 0) {
|
||||
return `Invalid XML: Found duplicate ID(s): ${duplicateIds.slice(0, 3).join(", ")}. All id attributes must be unique.`
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/** Check for tag mismatches using parsed tags */
|
||||
function checkTagMismatches(xml: string): string | null {
|
||||
const xmlWithoutComments = xml.replace(/<!--[\s\S]*?-->/g, "")
|
||||
const tags = parseXmlTags(xmlWithoutComments)
|
||||
const tagStack: string[] = []
|
||||
|
||||
for (const { tagName, isClosing, isSelfClosing } of tags) {
|
||||
if (isClosing) {
|
||||
if (tagStack.length === 0) {
|
||||
return `Invalid XML: Closing tag </${tagName}> without matching opening tag`
|
||||
}
|
||||
const expected = tagStack.pop()
|
||||
if (expected?.toLowerCase() !== tagName.toLowerCase()) {
|
||||
return `Invalid XML: Expected closing tag </${expected}> but found </${tagName}>`
|
||||
}
|
||||
} else if (!isSelfClosing) {
|
||||
tagStack.push(tagName)
|
||||
}
|
||||
}
|
||||
if (tagStack.length > 0) {
|
||||
return `Invalid XML: Document has ${tagStack.length} unclosed tag(s): ${tagStack.join(", ")}`
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/** Check for invalid character references */
|
||||
function checkCharacterReferences(xml: string): string | null {
|
||||
const charRefPattern = /&#x?[^;]+;?/g
|
||||
let charMatch
|
||||
while ((charMatch = charRefPattern.exec(xml)) !== null) {
|
||||
const ref = charMatch[0]
|
||||
if (ref.startsWith("&#x")) {
|
||||
if (!ref.endsWith(";")) {
|
||||
return `Invalid XML: Missing semicolon after hex reference: ${ref}`
|
||||
}
|
||||
const hexDigits = ref.substring(3, ref.length - 1)
|
||||
if (hexDigits.length === 0 || !/^[0-9a-fA-F]+$/.test(hexDigits)) {
|
||||
return `Invalid XML: Invalid hex character reference: ${ref}`
|
||||
}
|
||||
} else if (ref.startsWith("&#")) {
|
||||
if (!ref.endsWith(";")) {
|
||||
return `Invalid XML: Missing semicolon after decimal reference: ${ref}`
|
||||
}
|
||||
const decDigits = ref.substring(2, ref.length - 1)
|
||||
if (decDigits.length === 0 || !/^[0-9]+$/.test(decDigits)) {
|
||||
return `Invalid XML: Invalid decimal character reference: ${ref}`
|
||||
}
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/** Check for invalid entity references */
|
||||
function checkEntityReferences(xml: string): string | null {
|
||||
const xmlWithoutComments = xml.replace(/<!--[\s\S]*?-->/g, "")
|
||||
const bareAmpPattern = /&(?!(?:lt|gt|amp|quot|apos|#))/g
|
||||
if (bareAmpPattern.test(xmlWithoutComments)) {
|
||||
return "Invalid XML: Found unescaped & character(s). Replace & with &amp;"
|
||||
}
|
||||
const invalidEntityPattern = /&([a-zA-Z][a-zA-Z0-9]*);/g
|
||||
let entityMatch
|
||||
while (
|
||||
(entityMatch = invalidEntityPattern.exec(xmlWithoutComments)) !== null
|
||||
) {
|
||||
if (!VALID_ENTITIES.has(entityMatch[1])) {
|
||||
return `Invalid XML: Invalid entity reference: &${entityMatch[1]}; - use only valid XML entities (lt, gt, amp, quot, apos)`
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/** Check for nested mxCell tags using regex */
|
||||
function checkNestedMxCells(xml: string): string | null {
|
||||
const cellTagPattern = /<\/?mxCell[^>]*>/g
|
||||
const cellStack: number[] = []
|
||||
let cellMatch
|
||||
while ((cellMatch = cellTagPattern.exec(xml)) !== null) {
|
||||
const tag = cellMatch[0]
|
||||
if (tag.startsWith("</mxCell>")) {
|
||||
if (cellStack.length > 0) cellStack.pop()
|
||||
} else if (!tag.endsWith("/>")) {
|
||||
const isLabelOrGeometry =
|
||||
/\sas\s*=\s*["'](valueLabel|geometry)["']/.test(tag)
|
||||
if (!isLabelOrGeometry) {
|
||||
cellStack.push(cellMatch.index)
|
||||
if (cellStack.length > 1) {
|
||||
return "Invalid XML: Found nested mxCell tags. Cells should be siblings, not nested inside other mxCell elements."
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates draw.io XML structure for common issues
|
||||
* Uses DOM parsing + additional regex checks for high accuracy
|
||||
* @param xml - The XML string to validate
|
||||
* @returns null if valid, error message string if invalid
|
||||
*/
|
||||
export function validateMxCellStructure(xml: string): string | null {
|
||||
// Size check for performance
|
||||
if (xml.length > MAX_XML_SIZE) {
|
||||
console.warn(
|
||||
`[validateMxCellStructure] XML size (${xml.length}) exceeds ${MAX_XML_SIZE} bytes, may cause performance issues`,
|
||||
)
|
||||
const parser = new DOMParser()
|
||||
const doc = parser.parseFromString(xml, "text/xml")
|
||||
|
||||
// Check for XML parsing errors (includes unescaped special characters)
|
||||
const parseError = doc.querySelector("parsererror")
|
||||
if (parseError) {
|
||||
return `Invalid XML: The XML contains syntax errors (likely unescaped special characters like <, >, & in attribute values). Please escape special characters: use &lt; for <, &gt; for >, &amp; for &, &quot; for ". Regenerate the diagram with properly escaped values.`
|
||||
}
|
||||
|
||||
// 0. First use DOM parser to catch syntax errors (most accurate)
|
||||
try {
|
||||
const parser = new DOMParser()
|
||||
const doc = parser.parseFromString(xml, "text/xml")
|
||||
const parseError = doc.querySelector("parsererror")
|
||||
if (parseError) {
|
||||
return `Invalid XML: The XML contains syntax errors (likely unescaped special characters like <, >, & in attribute values). Please escape special characters: use &lt; for <, &gt; for >, &amp; for &, &quot; for ". Regenerate the diagram with properly escaped values.`
|
||||
}
|
||||
// Get all mxCell elements once for all validations
|
||||
const allCells = doc.querySelectorAll("mxCell")
|
||||
|
||||
// DOM-based checks for nested mxCell
|
||||
const allCells = doc.querySelectorAll("mxCell")
|
||||
for (const cell of allCells) {
|
||||
if (cell.parentElement?.tagName === "mxCell") {
|
||||
const id = cell.getAttribute("id") || "unknown"
|
||||
return `Invalid XML: Found nested mxCell (id="${id}"). Cells should be siblings, not nested inside other mxCell elements.`
|
||||
// Single pass: collect IDs, check for duplicates, nesting, orphans, and invalid parents
|
||||
const cellIds = new Set<string>()
|
||||
const duplicateIds: string[] = []
|
||||
const nestedCells: string[] = []
|
||||
const orphanCells: string[] = []
|
||||
const invalidParents: { id: string; parent: string }[] = []
|
||||
const edgesToValidate: {
|
||||
id: string
|
||||
source: string | null
|
||||
target: string | null
|
||||
}[] = []
|
||||
|
||||
allCells.forEach((cell) => {
|
||||
const id = cell.getAttribute("id")
|
||||
const parent = cell.getAttribute("parent")
|
||||
const isEdge = cell.getAttribute("edge") === "1"
|
||||
|
||||
// Check for duplicate IDs
|
||||
if (id) {
|
||||
if (cellIds.has(id)) {
|
||||
duplicateIds.push(id)
|
||||
} else {
|
||||
cellIds.add(id)
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// Log unexpected DOMParser errors before falling back to regex checks
|
||||
console.warn(
|
||||
"[validateMxCellStructure] DOMParser threw unexpected error, falling back to regex validation:",
|
||||
error,
|
||||
)
|
||||
}
|
||||
|
||||
// 1. Check for CDATA wrapper (invalid at document root)
|
||||
if (/^\s*<!\[CDATA\[/.test(xml)) {
|
||||
return "Invalid XML: XML is wrapped in CDATA section - remove <![CDATA[ from start and ]]> from end"
|
||||
}
|
||||
|
||||
// 2. Check for duplicate structural attributes
|
||||
const dupAttrError = checkDuplicateAttributes(xml)
|
||||
if (dupAttrError) return dupAttrError
|
||||
|
||||
// 3. Check for unescaped < in attribute values
|
||||
const attrValuePattern = /=\s*"([^"]*)"/g
|
||||
let attrValMatch
|
||||
while ((attrValMatch = attrValuePattern.exec(xml)) !== null) {
|
||||
const value = attrValMatch[1]
|
||||
if (/</.test(value) && !/&lt;/.test(value)) {
|
||||
return "Invalid XML: Unescaped < character in attribute values. Replace < with &lt;"
|
||||
// Check for nested mxCell (parent element is also mxCell)
|
||||
if (cell.parentElement?.tagName === "mxCell") {
|
||||
nestedCells.push(id || "unknown")
|
||||
}
|
||||
}
|
||||
|
||||
// 4. Check for duplicate IDs
|
||||
const dupIdError = checkDuplicateIds(xml)
|
||||
if (dupIdError) return dupIdError
|
||||
|
||||
// 5. Check for tag mismatches
|
||||
const tagMismatchError = checkTagMismatches(xml)
|
||||
if (tagMismatchError) return tagMismatchError
|
||||
|
||||
// 6. Check invalid character references
|
||||
const charRefError = checkCharacterReferences(xml)
|
||||
if (charRefError) return charRefError
|
||||
|
||||
// 7. Check for invalid comment syntax (-- inside comments)
|
||||
const commentPattern = /<!--([\s\S]*?)-->/g
|
||||
let commentMatch
|
||||
while ((commentMatch = commentPattern.exec(xml)) !== null) {
|
||||
if (/--/.test(commentMatch[1])) {
|
||||
return "Invalid XML: Comment contains -- (double hyphen) which is not allowed"
|
||||
// Check parent attribute (skip root cell id="0")
|
||||
if (id !== "0") {
|
||||
if (!parent) {
|
||||
if (id) orphanCells.push(id)
|
||||
} else {
|
||||
// Store for later validation (after all IDs collected)
|
||||
invalidParents.push({ id: id || "unknown", parent })
|
||||
}
|
||||
}
|
||||
|
||||
// Collect edges for connection validation
|
||||
if (isEdge) {
|
||||
edgesToValidate.push({
|
||||
id: id || "unknown",
|
||||
source: cell.getAttribute("source"),
|
||||
target: cell.getAttribute("target"),
|
||||
})
|
||||
}
|
||||
})
|
||||
|
||||
// Return errors in priority order
|
||||
if (nestedCells.length > 0) {
|
||||
return `Invalid XML: Found nested mxCell elements (IDs: ${nestedCells.slice(0, 3).join(", ")}). All mxCell elements must be direct children of <root>, never nested inside other mxCell elements. Please regenerate the diagram with correct structure.`
|
||||
}
|
||||
|
||||
// 8. Check for unescaped entity references and invalid entity names
|
||||
const entityError = checkEntityReferences(xml)
|
||||
if (entityError) return entityError
|
||||
|
||||
// 9. Check for empty id attributes on mxCell
|
||||
if (/<mxCell[^>]*\sid\s*=\s*["']\s*["'][^>]*>/g.test(xml)) {
|
||||
return "Invalid XML: Found mxCell element(s) with empty id attribute"
|
||||
if (duplicateIds.length > 0) {
|
||||
return `Invalid XML: Found duplicate cell IDs (${duplicateIds.slice(0, 3).join(", ")}). Each mxCell must have a unique ID. Please regenerate the diagram with unique IDs for all elements.`
|
||||
}
|
||||
|
||||
// 10. Check for nested mxCell tags
|
||||
const nestedCellError = checkNestedMxCells(xml)
|
||||
if (nestedCellError) return nestedCellError
|
||||
if (orphanCells.length > 0) {
|
||||
return `Invalid XML: Found cells without parent attribute (IDs: ${orphanCells.slice(0, 3).join(", ")}). All mxCell elements (except id="0") must have a parent attribute. Please regenerate the diagram with proper parent references.`
|
||||
}
|
||||
|
||||
// Validate parent references (now that all IDs are collected)
|
||||
const badParents = invalidParents.filter((p) => !cellIds.has(p.parent))
|
||||
if (badParents.length > 0) {
|
||||
const details = badParents
|
||||
.slice(0, 3)
|
||||
.map((p) => `${p.id} (parent: ${p.parent})`)
|
||||
.join(", ")
|
||||
return `Invalid XML: Found cells with invalid parent references (${details}). Parent IDs must reference existing cells. Please regenerate the diagram with valid parent references.`
|
||||
}
|
||||
|
||||
// Validate edge connections
|
||||
const invalidConnections: string[] = []
|
||||
edgesToValidate.forEach((edge) => {
|
||||
if (edge.source && !cellIds.has(edge.source)) {
|
||||
invalidConnections.push(`${edge.id} (source: ${edge.source})`)
|
||||
}
|
||||
if (edge.target && !cellIds.has(edge.target)) {
|
||||
invalidConnections.push(`${edge.id} (target: ${edge.target})`)
|
||||
}
|
||||
})
|
||||
|
||||
if (invalidConnections.length > 0) {
|
||||
return `Invalid XML: Found edges with invalid source/target references (${invalidConnections.slice(0, 3).join(", ")}). Edge source and target must reference existing cell IDs. Please regenerate the diagram with valid edge connections.`
|
||||
}
|
||||
|
||||
// Check for orphaned mxPoint elements (not inside <Array as="points"> and without 'as' attribute)
|
||||
// These cause "Could not add object mxPoint" errors in draw.io
|
||||
const allMxPoints = doc.querySelectorAll("mxPoint")
|
||||
const orphanedMxPoints: string[] = []
|
||||
allMxPoints.forEach((point) => {
|
||||
const hasAsAttr = point.hasAttribute("as")
|
||||
const parentIsArray =
|
||||
point.parentElement?.tagName === "Array" &&
|
||||
point.parentElement?.getAttribute("as") === "points"
|
||||
|
||||
if (!hasAsAttr && !parentIsArray) {
|
||||
// Find the parent mxCell to report which edge has the problem
|
||||
let parent = point.parentElement
|
||||
while (parent && parent.tagName !== "mxCell") {
|
||||
parent = parent.parentElement
|
||||
}
|
||||
const cellId = parent?.getAttribute("id") || "unknown"
|
||||
if (!orphanedMxPoints.includes(cellId)) {
|
||||
orphanedMxPoints.push(cellId)
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
if (orphanedMxPoints.length > 0) {
|
||||
return `Invalid XML: Found orphaned mxPoint elements in cells (${orphanedMxPoints.slice(0, 3).join(", ")}). mxPoint elements must either have an 'as' attribute (e.g., as="sourcePoint") or be inside <Array as="points">. For edge waypoints, use: <Array as="points"><mxPoint x="..." y="..."/></Array>. Please fix the mxPoint structure.`
|
||||
}
|
||||
|
||||
return null
|
||||
}
|
||||
|
||||
/**
|
||||
* Attempts to auto-fix common XML issues in draw.io diagrams
|
||||
* @param xml - The XML string to fix
|
||||
* @returns Object with fixed XML and list of fixes applied
|
||||
*/
|
||||
export function autoFixXml(xml: string): { fixed: string; fixes: string[] } {
|
||||
let fixed = xml
|
||||
const fixes: string[] = []
|
||||
|
||||
// 0. Fix JSON-escaped XML (common when XML is stored in JSON without unescaping)
|
||||
// Only apply when we see JSON-escaped attribute patterns like =\"value\"
|
||||
// Don't apply to legitimate \n in value attributes (draw.io uses these for line breaks)
|
||||
if (/=\\"/.test(fixed)) {
|
||||
// Replace literal \" with actual quotes
|
||||
fixed = fixed.replace(/\\"/g, '"')
|
||||
// Replace literal \n with actual newlines (only after confirming JSON-escaped)
|
||||
fixed = fixed.replace(/\\n/g, "\n")
|
||||
fixes.push("Fixed JSON-escaped XML")
|
||||
}
|
||||
|
||||
// 1. Remove CDATA wrapper (MUST be before text-before-root check)
|
||||
if (/^\s*<!\[CDATA\[/.test(fixed)) {
|
||||
fixed = fixed.replace(/^\s*<!\[CDATA\[/, "").replace(/\]\]>\s*$/, "")
|
||||
fixes.push("Removed CDATA wrapper")
|
||||
}
|
||||
|
||||
// 2. Remove text before XML declaration or root element (only if it's garbage text, not valid XML)
|
||||
const xmlStart = fixed.search(/<(\?xml|mxGraphModel|mxfile)/i)
|
||||
if (xmlStart > 0 && !/^<[a-zA-Z]/.test(fixed.trim())) {
|
||||
fixed = fixed.substring(xmlStart)
|
||||
fixes.push("Removed text before XML root")
|
||||
}
|
||||
|
||||
// 2. Fix duplicate attributes (keep first occurrence, remove duplicates)
|
||||
let dupAttrFixed = false
|
||||
fixed = fixed.replace(/<[^>]+>/g, (tag) => {
|
||||
let newTag = tag
|
||||
|
||||
for (const attr of STRUCTURAL_ATTRS) {
|
||||
// Find all occurrences of this attribute
|
||||
const attrRegex = new RegExp(
|
||||
`\\s${attr}\\s*=\\s*["'][^"']*["']`,
|
||||
"gi",
|
||||
)
|
||||
const matches = tag.match(attrRegex)
|
||||
|
||||
if (matches && matches.length > 1) {
|
||||
// Keep first, remove others
|
||||
let firstKept = false
|
||||
newTag = newTag.replace(attrRegex, (m) => {
|
||||
if (!firstKept) {
|
||||
firstKept = true
|
||||
return m
|
||||
}
|
||||
dupAttrFixed = true
|
||||
return ""
|
||||
})
|
||||
}
|
||||
}
|
||||
return newTag
|
||||
})
|
||||
if (dupAttrFixed) {
|
||||
fixes.push("Removed duplicate structural attributes")
|
||||
}
|
||||
|
||||
// 3. Fix unescaped & characters (but not valid entities)
|
||||
// Match & not followed by valid entity pattern
|
||||
const ampersandPattern =
|
||||
/&(?!(?:lt|gt|amp|quot|apos|#[0-9]+|#x[0-9a-fA-F]+);)/g
|
||||
if (ampersandPattern.test(fixed)) {
|
||||
fixed = fixed.replace(
/&(?!(?:lt|gt|amp|quot|apos|#[0-9]+|#x[0-9a-fA-F]+);)/g,
"&amp;",
|
||||
)
|
||||
fixes.push("Escaped unescaped & characters")
|
||||
}
|
||||
|
||||
// 3. Fix invalid entity names like &amp;quot; -> &quot;
// Common mistake: double-escaping
const invalidEntities = [
{ pattern: /&amp;quot;/g, replacement: "&quot;", name: "&amp;quot;" },
{ pattern: /&amp;lt;/g, replacement: "&lt;", name: "&amp;lt;" },
{ pattern: /&amp;gt;/g, replacement: "&gt;", name: "&amp;gt;" },
{ pattern: /&amp;apos;/g, replacement: "&apos;", name: "&amp;apos;" },
{ pattern: /&amp;amp;/g, replacement: "&amp;", name: "&amp;amp;" },
|
||||
]
|
||||
for (const { pattern, replacement, name } of invalidEntities) {
|
||||
if (pattern.test(fixed)) {
|
||||
fixed = fixed.replace(pattern, replacement)
|
||||
fixes.push(`Fixed double-escaped entity ${name}`)
|
||||
}
|
||||
}
|
||||
|
||||
// 3b. Fix malformed attribute values where &quot; is used as delimiter instead of actual quotes
// Pattern: attr=&quot;value&quot; should become attr="value" (the &quot; was meant to be the quote delimiter)
// This commonly happens with dashPattern=&quot;1 1;&quot;
const malformedQuotePattern = /(\s[a-zA-Z][a-zA-Z0-9_:-]*)=&quot;/
if (malformedQuotePattern.test(fixed)) {
// Replace the leading =&quot; with =" and the trailing &quot; before the next attribute or tag end with "
fixed = fixed.replace(
/(\s[a-zA-Z][a-zA-Z0-9_:-]*)=&quot;([^&]*?)&quot;/g,
'$1="$2"',
)
fixes.push(
'Fixed malformed attribute quotes (=&quot;...&quot; to ="...")',
|
||||
)
|
||||
}
|
||||
|
||||
// 3c. Fix malformed closing tags like </tag/> -> </tag>
|
||||
const malformedClosingTag = /<\/([a-zA-Z][a-zA-Z0-9]*)\s*\/>/g
|
||||
if (malformedClosingTag.test(fixed)) {
|
||||
fixed = fixed.replace(/<\/([a-zA-Z][a-zA-Z0-9]*)\s*\/>/g, "</$1>")
|
||||
fixes.push("Fixed malformed closing tags (</tag/> to </tag>)")
|
||||
}
|
||||
|
||||
// 3d. Fix missing space between attributes like vertex="1"parent="1"
|
||||
const missingSpacePattern = /("[^"]*")([a-zA-Z][a-zA-Z0-9_:-]*=)/g
|
||||
if (missingSpacePattern.test(fixed)) {
|
||||
fixed = fixed.replace(/("[^"]*")([a-zA-Z][a-zA-Z0-9_:-]*=)/g, "$1 $2")
|
||||
fixes.push("Added missing space between attributes")
|
||||
}
|
||||
|
||||
// 3e. Fix unescaped quotes in style color values like fillColor=&quot;#fff2e6&quot;
// The &quot; after Color= prematurely ends the style attribute. Remove it.
// Pattern: ;fillColor=&quot;#fff → ;fillColor=#fff (remove first &quot;, keep second as style closer)
const quotedColorPattern = /;([a-zA-Z]*[Cc]olor)=&quot;#/
if (quotedColorPattern.test(fixed)) {
fixed = fixed.replace(/;([a-zA-Z]*[Cc]olor)=&quot;#/g, ";$1=#")
fixes.push("Removed quotes around color values in style")
|
||||
}
|
||||
|
||||
// 4. Fix unescaped < in attribute values
|
||||
// This is tricky - we need to find < inside quoted attribute values
|
||||
const attrPattern = /(=\s*")([^"]*?)(<)([^"]*?)(")/g
|
||||
let attrMatch
|
||||
let hasUnescapedLt = false
|
||||
while ((attrMatch = attrPattern.exec(fixed)) !== null) {
|
||||
if (!attrMatch[3].startsWith("&lt;")) {
|
||||
hasUnescapedLt = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if (hasUnescapedLt) {
|
||||
// Replace < with &lt; inside attribute values
|
||||
fixed = fixed.replace(/=\s*"([^"]*)"/g, (_match, value) => {
|
||||
const escaped = value.replace(/</g, "&lt;")
|
||||
return `="${escaped}"`
|
||||
})
|
||||
fixes.push("Escaped < characters in attribute values")
|
||||
}
|
||||
|
||||
// 5. Fix invalid character references (remove malformed ones)
|
||||
// Pattern: &#x followed by non-hex chars before ;
|
||||
const invalidHexRefs: string[] = []
|
||||
fixed = fixed.replace(/&#x([^;]*);/g, (match, hex) => {
|
||||
if (/^[0-9a-fA-F]+$/.test(hex) && hex.length > 0) {
|
||||
return match // Valid hex ref, keep it
|
||||
}
|
||||
invalidHexRefs.push(match)
|
||||
return "" // Remove invalid ref
|
||||
})
|
||||
if (invalidHexRefs.length > 0) {
|
||||
fixes.push(
|
||||
`Removed ${invalidHexRefs.length} invalid hex character reference(s)`,
|
||||
)
|
||||
}
|
||||
|
||||
// 6. Fix invalid decimal character references
|
||||
const invalidDecRefs: string[] = []
|
||||
fixed = fixed.replace(/&#([^x][^;]*);/g, (match, dec) => {
|
||||
if (/^[0-9]+$/.test(dec) && dec.length > 0) {
|
||||
return match // Valid decimal ref, keep it
|
||||
}
|
||||
invalidDecRefs.push(match)
|
||||
return "" // Remove invalid ref
|
||||
})
|
||||
if (invalidDecRefs.length > 0) {
|
||||
fixes.push(
|
||||
`Removed ${invalidDecRefs.length} invalid decimal character reference(s)`,
|
||||
)
|
||||
}
|
||||
|
||||
// 7. Fix invalid comment syntax (replace -- with - repeatedly until none left)
|
||||
fixed = fixed.replace(/<!--([\s\S]*?)-->/g, (match, content) => {
|
||||
if (/--/.test(content)) {
|
||||
// Keep replacing until no double hyphens remain
|
||||
let fixedContent = content
|
||||
while (/--/.test(fixedContent)) {
|
||||
fixedContent = fixedContent.replace(/--/g, "-")
|
||||
}
|
||||
fixes.push("Fixed invalid comment syntax (removed double hyphens)")
|
||||
return `<!--${fixedContent}-->`
|
||||
}
|
||||
return match
|
||||
})
|
||||
|
||||
// 8. Fix <Cell> tags that should be <mxCell> (common LLM mistake)
|
||||
// This handles both opening and closing tags
|
||||
const hasCellTags = /<\/?Cell[\s>]/i.test(fixed)
|
||||
if (hasCellTags) {
|
||||
fixed = fixed.replace(/<Cell(\s)/gi, "<mxCell$1")
|
||||
fixed = fixed.replace(/<Cell>/gi, "<mxCell>")
|
||||
fixed = fixed.replace(/<\/Cell>/gi, "</mxCell>")
|
||||
fixes.push("Fixed <Cell> tags to <mxCell>")
|
||||
}
|
||||
|
||||
// 9. Fix common closing tag typos
|
||||
const tagTypos = [
|
||||
{ wrong: /<\/mxElement>/gi, right: "</mxCell>", name: "</mxElement>" },
|
||||
{ wrong: /<\/mxcell>/g, right: "</mxCell>", name: "</mxcell>" }, // case sensitivity
|
||||
{
|
||||
wrong: /<\/mxgeometry>/g,
|
||||
right: "</mxGeometry>",
|
||||
name: "</mxgeometry>",
|
||||
},
|
||||
{ wrong: /<\/mxpoint>/g, right: "</mxPoint>", name: "</mxpoint>" },
|
||||
{
|
||||
wrong: /<\/mxgraphmodel>/gi,
|
||||
right: "</mxGraphModel>",
|
||||
name: "</mxgraphmodel>",
|
||||
},
|
||||
]
|
||||
for (const { wrong, right, name } of tagTypos) {
|
||||
if (wrong.test(fixed)) {
|
||||
fixed = fixed.replace(wrong, right)
|
||||
fixes.push(`Fixed typo ${name} to ${right}`)
|
||||
}
|
||||
}
|
||||
|
||||
// 10. Fix unclosed tags by appending missing closing tags
|
||||
// Use parseXmlTags helper to track open tags
|
||||
const tagStack: string[] = []
|
||||
const parsedTags = parseXmlTags(fixed)
|
||||
|
||||
for (const { tagName, isClosing, isSelfClosing } of parsedTags) {
|
||||
if (isClosing) {
|
||||
// Find matching opening tag (may not be the last one if there's mismatch)
|
||||
const lastIdx = tagStack.lastIndexOf(tagName)
|
||||
if (lastIdx !== -1) {
|
||||
tagStack.splice(lastIdx, 1)
|
||||
}
|
||||
} else if (!isSelfClosing) {
|
||||
tagStack.push(tagName)
|
||||
}
|
||||
}
|
||||
|
||||
// If there are unclosed tags, append closing tags in reverse order
|
||||
// But first verify with simple count that they're actually unclosed
|
||||
if (tagStack.length > 0) {
|
||||
const tagsToClose: string[] = []
|
||||
for (const tagName of tagStack.reverse()) {
|
||||
// Simple count check: only close if opens > closes
|
||||
const openCount = (
|
||||
fixed.match(new RegExp(`<${tagName}[\\s>]`, "gi")) || []
|
||||
).length
|
||||
const closeCount = (
|
||||
fixed.match(new RegExp(`</${tagName}>`, "gi")) || []
|
||||
).length
|
||||
if (openCount > closeCount) {
|
||||
tagsToClose.push(tagName)
|
||||
}
|
||||
}
|
||||
if (tagsToClose.length > 0) {
|
||||
const closingTags = tagsToClose.map((t) => `</${t}>`).join("\n")
|
||||
fixed = fixed.trimEnd() + "\n" + closingTags
|
||||
fixes.push(
|
||||
`Closed ${tagsToClose.length} unclosed tag(s): ${tagsToClose.join(", ")}`,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// 11. Fix nested mxCell by flattening
|
||||
// Pattern A: <mxCell id="X">...<mxCell id="X">...</mxCell></mxCell> (duplicate ID)
|
||||
// Pattern B: <mxCell id="X">...<mxCell id="Y">...</mxCell></mxCell> (different ID - true nesting)
|
||||
const lines = fixed.split("\n")
|
||||
let newLines: string[] = []
|
||||
let nestedFixed = 0
|
||||
let extraClosingToRemove = 0
|
||||
|
||||
// First pass: fix duplicate ID nesting (same as before)
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i]
|
||||
const nextLine = lines[i + 1]
|
||||
|
||||
// Check if current line and next line are both mxCell opening tags with same ID
|
||||
if (
|
||||
nextLine &&
|
||||
/<mxCell\s/.test(line) &&
|
||||
/<mxCell\s/.test(nextLine) &&
|
||||
!line.includes("/>") &&
|
||||
!nextLine.includes("/>")
|
||||
) {
|
||||
const id1 = line.match(/\bid\s*=\s*["']([^"']+)["']/)?.[1]
|
||||
const id2 = nextLine.match(/\bid\s*=\s*["']([^"']+)["']/)?.[1]
|
||||
|
||||
if (id1 && id1 === id2) {
|
||||
nestedFixed++
|
||||
extraClosingToRemove++ // Need to remove one </mxCell> later
|
||||
continue // Skip this duplicate opening line
|
||||
}
|
||||
}
|
||||
|
||||
// Remove extra </mxCell> if we have pending removals
|
||||
if (extraClosingToRemove > 0 && /^\s*<\/mxCell>\s*$/.test(line)) {
|
||||
extraClosingToRemove--
|
||||
continue // Skip this closing tag
|
||||
}
|
||||
|
||||
newLines.push(line)
|
||||
}
|
||||
|
||||
if (nestedFixed > 0) {
|
||||
fixed = newLines.join("\n")
|
||||
fixes.push(`Flattened ${nestedFixed} duplicate-ID nested mxCell(s)`)
|
||||
}
|
||||
|
||||
// Second pass: fix true nesting (different IDs)
|
||||
// Insert </mxCell> before nested child to close parent
|
||||
const lines2 = fixed.split("\n")
|
||||
newLines = []
|
||||
let trueNestedFixed = 0
|
||||
let cellDepth = 0
|
||||
let pendingCloseRemoval = 0
|
||||
|
||||
for (let i = 0; i < lines2.length; i++) {
|
||||
const line = lines2[i]
|
||||
const trimmed = line.trim()
|
||||
|
||||
// Track mxCell depth
|
||||
const isOpenCell = /<mxCell\s/.test(trimmed) && !trimmed.endsWith("/>")
|
||||
const isCloseCell = trimmed === "</mxCell>"
|
||||
|
||||
if (isOpenCell) {
|
||||
if (cellDepth > 0) {
|
||||
// Found nested cell - insert closing tag for parent before this line
|
||||
const indent = line.match(/^(\s*)/)?.[1] || ""
|
||||
newLines.push(indent + "</mxCell>")
|
||||
trueNestedFixed++
|
||||
pendingCloseRemoval++ // Need to remove one </mxCell> later
|
||||
}
|
||||
cellDepth = 1 // Reset to 1 since we just opened a new cell
|
||||
newLines.push(line)
|
||||
} else if (isCloseCell) {
|
||||
if (pendingCloseRemoval > 0) {
|
||||
pendingCloseRemoval--
|
||||
// Skip this extra closing tag
|
||||
} else {
|
||||
cellDepth = Math.max(0, cellDepth - 1)
|
||||
newLines.push(line)
|
||||
}
|
||||
} else {
|
||||
newLines.push(line)
|
||||
}
|
||||
}
|
||||
|
||||
if (trueNestedFixed > 0) {
|
||||
fixed = newLines.join("\n")
|
||||
fixes.push(`Fixed ${trueNestedFixed} true nested mxCell(s)`)
|
||||
}
|
||||
|
||||
// 12. Fix duplicate IDs by appending suffix
|
||||
const seenIds = new Map<string, number>()
|
||||
const duplicateIds: string[] = []
|
||||
|
||||
// First pass: find duplicates
|
||||
const idPattern = /\bid\s*=\s*["']([^"']+)["']/gi
|
||||
let idMatch
|
||||
while ((idMatch = idPattern.exec(fixed)) !== null) {
|
||||
const id = idMatch[1]
|
||||
seenIds.set(id, (seenIds.get(id) || 0) + 1)
|
||||
}
|
||||
|
||||
// Find which IDs are duplicated
|
||||
for (const [id, count] of seenIds) {
|
||||
if (count > 1) duplicateIds.push(id)
|
||||
}
|
||||
|
||||
// Second pass: rename duplicates (keep first occurrence, rename others)
|
||||
if (duplicateIds.length > 0) {
|
||||
const idCounters = new Map<string, number>()
|
||||
fixed = fixed.replace(/\bid\s*=\s*["']([^"']+)["']/gi, (match, id) => {
|
||||
if (!duplicateIds.includes(id)) return match
|
||||
|
||||
const count = idCounters.get(id) || 0
|
||||
idCounters.set(id, count + 1)
|
||||
|
||||
if (count === 0) return match // Keep first occurrence
|
||||
|
||||
// Rename subsequent occurrences
|
||||
const newId = `${id}_dup${count}`
|
||||
return match.replace(id, newId)
|
||||
})
|
||||
fixes.push(`Renamed ${duplicateIds.length} duplicate ID(s)`)
|
||||
}
|
||||
|
||||
// 9. Fix empty id attributes by generating unique IDs
|
||||
let emptyIdCount = 0
|
||||
fixed = fixed.replace(
|
||||
/<mxCell([^>]*)\sid\s*=\s*["']\s*["']([^>]*)>/g,
|
||||
(_match, before, after) => {
|
||||
emptyIdCount++
|
||||
const newId = `cell_${Date.now()}_${emptyIdCount}`
|
||||
return `<mxCell${before} id="${newId}"${after}>`
|
||||
},
|
||||
)
|
||||
if (emptyIdCount > 0) {
|
||||
fixes.push(`Generated ${emptyIdCount} missing ID(s)`)
|
||||
}
|
||||
|
||||
// 13. Aggressive: drop broken mxCell elements that can't be fixed
|
||||
// Only do this if DOM parser still finds errors after all other fixes
|
||||
if (typeof DOMParser !== "undefined") {
|
||||
let droppedCells = 0
|
||||
let maxIterations = MAX_DROP_ITERATIONS
|
||||
while (maxIterations-- > 0) {
|
||||
const parser = new DOMParser()
|
||||
const doc = parser.parseFromString(fixed, "text/xml")
|
||||
const parseError = doc.querySelector("parsererror")
|
||||
if (!parseError) break // Valid now!
|
||||
|
||||
const errText = parseError.textContent || ""
|
||||
const match = errText.match(/(\d+):\d+:/)
|
||||
if (!match) break
|
||||
|
||||
const errLine = parseInt(match[1], 10) - 1
|
||||
const lines = fixed.split("\n")
|
||||
|
||||
// Find the mxCell containing this error line
|
||||
let cellStart = errLine
|
||||
let cellEnd = errLine
|
||||
|
||||
// Go back to find <mxCell
|
||||
while (cellStart > 0 && !lines[cellStart].includes("<mxCell")) {
|
||||
cellStart--
|
||||
}
|
||||
|
||||
// Go forward to find </mxCell> or />
|
||||
while (cellEnd < lines.length - 1) {
|
||||
if (
|
||||
lines[cellEnd].includes("</mxCell>") ||
|
||||
lines[cellEnd].trim().endsWith("/>")
|
||||
) {
|
||||
break
|
||||
}
|
||||
cellEnd++
|
||||
}
|
||||
|
||||
// Remove these lines
|
||||
lines.splice(cellStart, cellEnd - cellStart + 1)
|
||||
fixed = lines.join("\n")
|
||||
droppedCells++
|
||||
}
|
||||
if (droppedCells > 0) {
|
||||
fixes.push(`Dropped ${droppedCells} unfixable mxCell element(s)`)
|
||||
}
|
||||
}
|
||||
|
||||
return { fixed, fixes }
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates XML and attempts to fix if invalid
|
||||
* @param xml - The XML string to validate and potentially fix
|
||||
* @returns Object with validation result, fixed XML if applicable, and fixes applied
|
||||
*/
|
||||
export function validateAndFixXml(xml: string): {
|
||||
valid: boolean
|
||||
error: string | null
|
||||
fixed: string | null
|
||||
fixes: string[]
|
||||
} {
|
||||
// First validation attempt
|
||||
let error = validateMxCellStructure(xml)
|
||||
|
||||
if (!error) {
|
||||
return { valid: true, error: null, fixed: null, fixes: [] }
|
||||
}
|
||||
|
||||
// Try to fix
|
||||
const { fixed, fixes } = autoFixXml(xml)
|
||||
|
||||
// Validate the fixed version
|
||||
error = validateMxCellStructure(fixed)
|
||||
|
||||
if (!error) {
|
||||
return { valid: true, error: null, fixed, fixes }
|
||||
}
|
||||
|
||||
// Still invalid after fixes
|
||||
return { valid: false, error, fixed: null, fixes }
|
||||
}
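A hedged sketch of how a caller might use the validate-then-fix flow; the malformed snippet is invented for illustration.

```ts
import { validateAndFixXml } from "@/lib/utils"

// Illustrative only: an unterminated model with an unescaped ampersand.
const raw = `<mxGraphModel><root><mxCell id="0"/><mxCell id="1" parent="0"/><mxCell id="a" value="R&D" vertex="1" parent="1"/></root>`

const result = validateAndFixXml(raw)
if (result.valid) {
    // Prefer the repaired XML when auto-fixes were applied.
    console.log("Fixes applied:", result.fixes)
    console.log("Loading diagram:", result.fixed ?? raw)
} else {
    // Still invalid after autoFixXml; surface the error back to the model or user.
    console.error(result.error)
}
```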
|
||||
|
||||
export function extractDiagramXML(xml_svg_string: string): string {
|
||||
try {
|
||||
// 1. Parse the SVG string (using built-in DOMParser in a browser-like environment)
|
||||
|
||||
3
open-next.config.ts
Normal file
@@ -0,0 +1,3 @@
|
||||
import { defineCloudflareConfig } from "@opennextjs/cloudflare"
|
||||
|
||||
export default defineCloudflareConfig()
|
||||
8896
package-lock.json
generated
File diff suppressed because it is too large
24
package.json
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "next-ai-draw-io",
|
||||
"version": "0.4.0",
|
||||
"version": "0.3.0",
|
||||
"license": "Apache-2.0",
|
||||
"private": true,
|
||||
"scripts": {
|
||||
@@ -10,10 +10,14 @@
|
||||
"lint": "biome lint .",
|
||||
"format": "biome check --write .",
|
||||
"check": "biome ci",
|
||||
"prepare": "husky"
|
||||
"prepare": "husky",
|
||||
"cf:build": "opennextjs-cloudflare build",
|
||||
"cf:preview": "opennextjs-cloudflare build && opennextjs-cloudflare preview",
|
||||
"cf:deploy": "opennextjs-cloudflare build && opennextjs-cloudflare deploy",
|
||||
"cf:typegen": "wrangler types --env-interface CloudflareEnv cloudflare-env.d.ts"
|
||||
},
|
||||
"dependencies": {
|
||||
"@ai-sdk/amazon-bedrock": "^3.0.70",
|
||||
"@ai-sdk/amazon-bedrock": "^3.0.62",
|
||||
"@ai-sdk/anthropic": "^2.0.44",
|
||||
"@ai-sdk/azure": "^2.0.69",
|
||||
"@ai-sdk/deepseek": "^1.0.30",
|
||||
@@ -25,10 +29,10 @@
|
||||
"@langfuse/otel": "^4.4.4",
|
||||
"@langfuse/tracing": "^4.4.9",
|
||||
"@next/third-parties": "^16.0.6",
|
||||
"@opennextjs/cloudflare": "^1.14.4",
|
||||
"@openrouter/ai-sdk-provider": "^1.2.3",
|
||||
"@opentelemetry/exporter-trace-otlp-http": "^0.208.0",
|
||||
"@opentelemetry/sdk-trace-node": "^2.2.0",
|
||||
"@radix-ui/react-collapsible": "^1.1.12",
|
||||
"@radix-ui/react-dialog": "^1.1.6",
|
||||
"@radix-ui/react-label": "^2.1.8",
|
||||
"@radix-ui/react-scroll-area": "^1.2.3",
|
||||
@@ -36,22 +40,20 @@
|
||||
"@radix-ui/react-slot": "^1.1.2",
|
||||
"@radix-ui/react-switch": "^1.2.6",
|
||||
"@radix-ui/react-tooltip": "^1.1.8",
|
||||
"@radix-ui/react-use-controllable-state": "^1.2.2",
|
||||
"@vercel/analytics": "^1.5.0",
|
||||
"@xmldom/xmldom": "^0.9.8",
|
||||
"ai": "^5.0.89",
|
||||
"base-64": "^1.0.0",
|
||||
"class-variance-authority": "^0.7.1",
|
||||
"clsx": "^2.1.1",
|
||||
"js-tiktoken": "^1.0.21",
|
||||
"jsdom": "^26.0.0",
|
||||
"lucide-react": "^0.483.0",
|
||||
"motion": "^12.23.25",
|
||||
"next": "^16.0.7",
|
||||
"ollama-ai-provider-v2": "^1.5.4",
|
||||
"pako": "^2.1.0",
|
||||
"prism-react-renderer": "^2.4.1",
|
||||
"react": "^19.1.2",
|
||||
"react-dom": "^19.1.2",
|
||||
"react": "^19.0.0",
|
||||
"react-dom": "^19.0.0",
|
||||
"react-drawio": "^1.0.3",
|
||||
"react-icons": "^5.5.0",
|
||||
"react-markdown": "^10.1.0",
|
||||
@@ -60,7 +62,6 @@
|
||||
"sonner": "^2.0.7",
|
||||
"tailwind-merge": "^3.0.2",
|
||||
"tailwindcss-animate": "^1.0.7",
|
||||
"unpdf": "^1.4.0",
|
||||
"zod": "^4.1.12"
|
||||
},
|
||||
"lint-staged": {
|
||||
@@ -83,6 +84,7 @@
|
||||
"husky": "^9.1.7",
|
||||
"lint-staged": "^16.2.7",
|
||||
"tailwindcss": "^4",
|
||||
"typescript": "^5"
|
||||
"typescript": "^5",
|
||||
"wrangler": "^4.53.0"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,65 +0,0 @@
|
||||
Here is an extended summary of the paper **"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models"** by Jason Wei, et al. This detailed overview covers the background, methodology, extensive experimental results, emergent properties, and qualitative analysis found in the study.
|
||||
|
||||
### **1. Introduction and Motivation**
|
||||
The paper addresses a significant limitation in Large Language Models (LLMs): while scaling up model size (increasing parameters) has revolutionized performance on standard NLP tasks, it has not proven sufficient for challenging logical tasks such as arithmetic, commonsense, and symbolic reasoning.
|
||||
|
||||
Traditional techniques to solve these problems fell into two camps:
|
||||
1. **Finetuning:** Training models manually with large datasets of explanations (expensive and task-specific).
|
||||
2. **Standard Few-Shot Prompting:** Providing input-output pairs (e.g., Question $\rightarrow$ Answer) without explaining *how* the answer was derived. This often fails on multi-step problems.
|
||||
|
||||
The authors introduce **Chain-of-Thought (CoT) Prompting**, a simple method that combines the strengths of both approaches. It leverages the model's existing capabilities to generate natural language rationales without requiring any model parameter updates (finetuning).
|
||||
|
||||
### **2. Methodology: What is Chain-of-Thought?**
|
||||
The core innovation is changing the structure of the "exemplars" (the few-shot examples included in the prompt).
|
||||
* **Standard Prompting:** The model is shown a question and an immediate answer.
|
||||
* *Q: Roger has 5 balls. He buys 2 cans of 3 balls. How many now?*
|
||||
* *A: 11.*
|
||||
* **Chain-of-Thought Prompting:** The model is shown a question, followed by a series of intermediate natural language reasoning steps that lead to the answer.
|
||||
* *A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.*
|
||||
|
||||
By interacting with the model using this format, the LLM learns to generate its own "thought process" for new, unseen questions. This allows the model to decompose complex problems into manageable intermediate steps.
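To make the exemplar format concrete, here is a purely illustrative sketch (in TypeScript) of how the example above would be assembled into a few-shot prompt; the new question is a placeholder and no model call is shown:

```ts
// Illustrative only: one chain-of-thought exemplar followed by the target question.
const exemplar =
    "Q: Roger has 5 balls. He buys 2 cans of 3 balls. How many now?\n" +
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. " +
    "5 + 6 = 11. The answer is 11."

const newQuestion = "Q: <a new multi-step question>"

// The reasoning steps come before the final answer, so the model is nudged to
// continue the same pattern after "A:".
const prompt = `${exemplar}\n\n${newQuestion}\nA:`
```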
|
||||
|
||||
### **3. Experimental Setup**
|
||||
The researchers evaluated CoT prompting on several large language models, including **GPT-3 (175B)**, **LaMDA (137B)**, **PaLM (540B)**, **UL2 (20B)**, and **Codex**. They tested across three distinct domains of reasoning:
|
||||
* **Arithmetic Reasoning:** Using benchmarks like **GSM8K** (math word problems), **SVAMP**, **ASDiv**, **AQuA**, and **MAWPS**.
|
||||
* **Commonsense Reasoning:** Using datasets like **CSQA**, **StrategyQA**, **Date Understanding**, and **Sports Understanding**.
|
||||
* **Symbolic Reasoning:** Using tasks like **Last Letter Concatenation** and **Coin Flip** tracking (determining if a coin is heads or tails after a sequence of flips).
|
||||
|
||||
### **4. Key Findings and Results**
|
||||
|
||||
#### **Arithmetic Reasoning**
|
||||
The results on math word problems were striking. Standard prompting struggled significantly, often exhibiting a flat scaling curve (performance didn't improve much even as models got bigger).
|
||||
* **Performance Jump:** On the difficult **GSM8K** benchmark, **PaLM 540B** with CoT prompting achieved **56.9%** accuracy, compared to just 17.9% with standard prompting.
|
||||
* **Surpassing State-of-the-Art:** PaLM 540B with CoT outperformed a previously finetuned GPT-3 model (55%), establishing a new state-of-the-art without needing a training set.
|
||||
* **Calculator Integration:** The authors noted that some errors were simple calculation mistakes in otherwise correct logic. By hooking the CoT output into an external Python calculator, accuracy on GSM8K rose further to **58.6%**.
|
||||
|
||||
#### **Commonsense Reasoning**
|
||||
CoT prompting improved performance on tasks requiring background knowledge and physical intuition.
|
||||
* **StrategyQA:** PaLM 540B achieved **75.6%** accuracy via CoT, beating the prior state-of-the-art (69.4%).
|
||||
* **Sports Understanding:** The model achieved **95.4%** accuracy, surpassing the performance of an unaided sports enthusiast (84%).
|
||||
* The gains were minimal on CSQA, likely because many questions in that dataset did not require multi-step logic.
|
||||
|
||||
#### **Symbolic Reasoning and Generalization**
|
||||
A unique strength of CoT was enabling **Out-of-Domain (OOD) Generalization**.
|
||||
* In the **Coin Flip** task, the models were given examples with only 2 flips. However, using CoT, the models could successfully track coins flipped 3 or 4 times.
|
||||
* Standard prompting failed completely on these longer sequences, while CoT allowed the model to repeat the logical steps as many times as necessary to reach the solution.
|
||||
|
||||
### **5. Emergent Ability of Scale**
|
||||
One of the paper's most critical insights is that CoT reasoning is an **emergent ability** that depends on model size.
|
||||
* **Small Models (<10B parameters):** CoT prompting provided **no benefit** and often hurt performance. Small models produced fluent but illogical chains of thought (hallucinations) or suffered from repetition.
|
||||
* **Large Models (~100B+ parameters):** The ability to reason sequentially emerges at this scale. The performance gains from CoT are negligible for small models but increase dramatically for models like GPT-3 (175B) and PaLM (540B).
|
||||
|
||||
### **6. Why Does It Work? (Ablation Studies)**
|
||||
To ensure the improvement was due to the reasoning steps and not other factors, the authors conducted three specific ablations:
|
||||
1. **Equation Only:** They prompted the model to output just the math equation without words. This performed worse than CoT, suggesting that natural language helps the model "understand" the question semantics.
|
||||
2. **Variable Compute:** They prompted the model to output dots (...) to consume compute time before answering. This yielded no improvement, proving that the *content* of the reasoning steps matters, not just the extra tokens.
|
||||
3. **Reasoning After Answer:** They asked the model to give the answer first, then the explanation. This performed about the same as the baseline, proving that the chain of thought must come *before* the answer to guide the model's inference process.
|
||||
|
||||
### **7. Error Analysis and Robustness**
|
||||
The authors manually analyzed errors made by the models.
|
||||
* **Error Types:** In math problems, errors were categorized as **Semantic Understanding** (misunderstanding the question), **One-Step Missing** (skipping a logical step), or **Calculation Errors**.
|
||||
* **Impact of Scale:** Scaling from PaLM 62B to PaLM 540B significantly reduced semantic and missing-step errors, confirming that larger models are better at logic, not just memorization.
|
||||
* **Robustness:** The method proved robust to different annotators (different people writing the prompts) and to different choices of exemplars, although, as with any prompting approach, the particular prompt style still introduced some variance.
|
||||
|
||||
### **Conclusion**
|
||||
The paper establishes Chain-of-Thought prompting as a powerful paradigm for unlocking the reasoning potential of Large Language Models. By simply asking the model to "show its work," researchers can elicit complex logical behaviors that were previously thought to require specialized architectures or extensive finetuning. The work highlights that reasoning is an emergent capability of sufficiently large language models.
|
||||
@@ -1,4 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="140" height="36" viewBox="0 0 140 36">
|
||||
<rect width="140" height="36" rx="8" fill="#6366f1"/>
|
||||
<text x="70" y="24" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif" font-size="15" font-weight="600" fill="white" text-anchor="middle">🚀 Live Demo</text>
|
||||
</svg>
|
||||
|
tsconfig.json
@@ -24,6 +24,7 @@
|
||||
},
|
||||
"include": [
|
||||
"next-env.d.ts",
|
||||
"cloudflare-env.d.ts",
|
||||
"**/*.ts",
|
||||
"**/*.tsx",
|
||||
".next/types/**/*.ts",
|
||||
|
||||
12
vercel.json
@@ -1,12 +0,0 @@
|
||||
{
|
||||
"functions": {
|
||||
"app/api/chat/route.ts": {
|
||||
"memory": 512,
|
||||
"maxDuration": 120
|
||||
},
|
||||
"app/api/**/route.ts": {
|
||||
"memory": 256,
|
||||
"maxDuration": 10
|
||||
}
|
||||
}
|
||||
}
|
||||
8
wrangler.toml
Normal file
@@ -0,0 +1,8 @@
|
||||
main = ".open-next/worker.js"
|
||||
name = "next-ai-draw-io"
|
||||
compatibility_date = "2024-09-23"
|
||||
compatibility_flags = ["nodejs_compat"]
|
||||
|
||||
[assets]
|
||||
directory = ".open-next/assets"
|
||||
binding = "ASSETS"
|
||||