mirror of
https://github.com/DayuanJiang/next-ai-draw-io.git
synced 2026-01-02 14:22:28 +08:00
docs: reorganize documentation into i18n folder structure (#466)
* docs: reorganize docs into en/cn/ja folders
  - Move documentation files into language-specific folders (en, cn, ja)
  - Add Chinese and Japanese translations for all docs
  - Extract Docker section from README to a separate doc file
  - Update README to link to new doc locations
* docs: fix links to new docs folder structure
* docs: update README and provider docs
* docs: fix broken import statements in cloudflare deploy guides
* docs: sync CN/JA READMEs with EN structure and fix all paths
docs/en/ai-providers.md (new file, 236 lines)
@@ -0,0 +1,236 @@
# AI Provider Configuration

This guide explains how to configure different AI model providers for next-ai-draw-io.

## Quick Start

1. Copy `.env.example` to `.env.local`
2. Set your API key for your chosen provider
3. Set `AI_MODEL` to your desired model
4. Run `npm run dev`

## Supported Providers

### Doubao (ByteDance Volcengine)

> **Free tokens**: Register on the [Volcengine ARK platform](https://console.volcengine.com/ark/region:ark+cn-beijing/overview?briefPage=0&briefType=introduce&type=new&utm_campaign=doubao&utm_content=aidrawio&utm_medium=github&utm_source=coopensrc&utm_term=project) to get 500K free tokens for all models!

```bash
DOUBAO_API_KEY=your_api_key
AI_MODEL=doubao-seed-1-8-251215 # or other Doubao model
```

### Google Gemini

```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
```

Optional custom endpoint:

```bash
GOOGLE_BASE_URL=https://your-custom-endpoint
```

### OpenAI

```bash
OPENAI_API_KEY=your_api_key
AI_MODEL=gpt-4o
```

Optional custom endpoint (for OpenAI-compatible services):

```bash
OPENAI_BASE_URL=https://your-custom-endpoint/v1
```

### Anthropic

```bash
ANTHROPIC_API_KEY=your_api_key
AI_MODEL=claude-sonnet-4-5-20250929
```

Optional custom endpoint:

```bash
ANTHROPIC_BASE_URL=https://your-custom-endpoint
```

### DeepSeek

```bash
DEEPSEEK_API_KEY=your_api_key
AI_MODEL=deepseek-chat
```

Optional custom endpoint:

```bash
DEEPSEEK_BASE_URL=https://your-custom-endpoint
```

### SiliconFlow (OpenAI-compatible)

```bash
SILICONFLOW_API_KEY=your_api_key
AI_MODEL=deepseek-ai/DeepSeek-V3 # example; use any SiliconFlow model id
```

Optional custom endpoint (defaults to the recommended domain):

```bash
SILICONFLOW_BASE_URL=https://api.siliconflow.com/v1 # or https://api.siliconflow.cn/v1
```

### SGLang

```bash
SGLANG_API_KEY=your_api_key
AI_MODEL=your_model_id
```

Optional custom endpoint:

```bash
SGLANG_BASE_URL=https://your-custom-endpoint/v1
```

### Azure OpenAI

```bash
AZURE_API_KEY=your_api_key
AZURE_RESOURCE_NAME=your-resource-name # Required: your Azure resource name
AI_MODEL=your-deployment-name
```

Or use a custom endpoint instead of a resource name:

```bash
AZURE_API_KEY=your_api_key
AZURE_BASE_URL=https://your-resource.openai.azure.com # Alternative to AZURE_RESOURCE_NAME
AI_MODEL=your-deployment-name
```

Optional reasoning configuration:

```bash
AZURE_REASONING_EFFORT=low # Optional: low, medium, high
AZURE_REASONING_SUMMARY=detailed # Optional: none, brief, detailed
```

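For orientation, here is a minimal sketch of how these variables could map onto the Vercel AI SDK's Azure provider. The use of `@ai-sdk/azure` and the exact wiring are assumptions for illustration, not necessarily how this app constructs its client:

```ts
// Illustrative sketch only: assumes the Vercel AI SDK's @ai-sdk/azure
// package; the app's real provider setup may differ.
import { createAzure } from "@ai-sdk/azure"

const azure = createAzure({
    apiKey: process.env.AZURE_API_KEY,
    // Mirror the two configurations above: either a full base URL or a
    // resource name, but not both.
    ...(process.env.AZURE_BASE_URL
        ? { baseURL: process.env.AZURE_BASE_URL }
        : { resourceName: process.env.AZURE_RESOURCE_NAME }),
})

// Note that AI_MODEL holds the *deployment* name, not the model id.
const model = azure(process.env.AI_MODEL ?? "your-deployment-name")
```
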
### AWS Bedrock

```bash
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250929-v1:0
```

Note: On AWS (Lambda, or EC2 with an IAM role), credentials are obtained automatically from the IAM role.

### OpenRouter

```bash
OPENROUTER_API_KEY=your_api_key
AI_MODEL=anthropic/claude-sonnet-4
```

Optional custom endpoint:

```bash
OPENROUTER_BASE_URL=https://your-custom-endpoint
```

### Ollama (Local)

```bash
AI_PROVIDER=ollama
AI_MODEL=llama3.2
```

Optional custom URL:

```bash
OLLAMA_BASE_URL=http://localhost:11434
```

### Vercel AI Gateway

Vercel AI Gateway provides unified access to multiple AI providers through a single API key. This simplifies authentication and lets you switch between providers without managing multiple API keys.

**Basic Usage (Vercel-hosted Gateway):**

```bash
AI_GATEWAY_API_KEY=your_gateway_api_key
AI_MODEL=openai/gpt-4o
```

**Custom Gateway URL (for local development or self-hosted Gateway):**

```bash
AI_GATEWAY_API_KEY=your_custom_api_key
AI_GATEWAY_BASE_URL=https://your-custom-gateway.com/v1/ai
AI_MODEL=openai/gpt-4o
```

Model format uses `provider/model` syntax:

- `openai/gpt-4o` - OpenAI GPT-4o
- `anthropic/claude-sonnet-4-5` - Anthropic Claude Sonnet 4.5
- `google/gemini-2.0-flash` - Google Gemini 2.0 Flash

**Configuration notes:**

- If `AI_GATEWAY_BASE_URL` is not set, the default Vercel Gateway URL (`https://ai-gateway.vercel.sh/v1/ai`) is used
- A custom base URL is useful for:
    - Local development with a custom Gateway instance
    - Self-hosted AI Gateway deployments
    - Enterprise proxy configurations
- When using a custom base URL, you must also provide `AI_GATEWAY_API_KEY`

Get your API key from the [Vercel AI Gateway dashboard](https://vercel.com/ai-gateway).

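As a sketch of how the `provider/model` string is consumed, here is what a call through the Gateway might look like with the Vercel AI SDK. The `@ai-sdk/gateway` package and `createGateway` options are assumptions for illustration; the app's actual wiring may differ:

```ts
// Illustrative sketch: route a provider/model string through the Gateway
// using the Vercel AI SDK (assumes the @ai-sdk/gateway package).
import { createGateway } from "@ai-sdk/gateway"
import { generateText } from "ai"

const gateway = createGateway({
    apiKey: process.env.AI_GATEWAY_API_KEY,
    // When unset, the provider falls back to the hosted Gateway URL.
    baseURL: process.env.AI_GATEWAY_BASE_URL,
})

const { text } = await generateText({
    model: gateway("openai/gpt-4o"), // provider/model syntax
    prompt: "Say hello",
})
```
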
## Auto-Detection

If you only configure **one** provider's API key, the system will automatically detect and use that provider. There is no need to set `AI_PROVIDER`.

If you configure **multiple** API keys, you must explicitly set `AI_PROVIDER`:

```bash
AI_PROVIDER=google # or: openai, anthropic, deepseek, siliconflow, doubao, azure, bedrock, openrouter, ollama, gateway, sglang
```

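Conceptually, the detection rule looks like the sketch below. All names here are hypothetical, written only to illustrate the behavior described above, not the app's actual implementation:

```ts
// Hypothetical sketch of the auto-detection rule described above.
const PROVIDER_KEYS: Record<string, string | undefined> = {
    google: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    deepseek: process.env.DEEPSEEK_API_KEY,
    // ...the remaining providers follow the same pattern
}

function detectProvider(): string {
    // An explicit AI_PROVIDER always wins.
    if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER
    // Otherwise, exactly one configured key identifies the provider.
    const configured = Object.entries(PROVIDER_KEYS)
        .filter(([, key]) => Boolean(key))
        .map(([name]) => name)
    if (configured.length === 1) return configured[0]
    throw new Error(
        `Found ${configured.length} configured providers; set AI_PROVIDER explicitly.`
    )
}
```
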
## Model Capability Requirements

Diagram generation requires exceptionally strong model capabilities: the model must produce long-form output under strict formatting constraints (draw.io XML).

**Recommended models**:

- Claude Sonnet 4.5 / Opus 4.5

**Note on Ollama**: While Ollama is supported as a provider, it is generally not practical for this use case unless you are running high-capability models such as DeepSeek R1 or Qwen3-235B locally.

## Temperature Setting

You can optionally configure the temperature via an environment variable:

```bash
TEMPERATURE=0 # More deterministic output (recommended for diagrams)
```

**Important**: Leave `TEMPERATURE` unset for models that don't support temperature settings, such as:

- GPT-5.1 and other reasoning models
- Some specialized models

When unset, the model uses its default behavior.

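The usual way to honor such an optional setting is to pass the parameter only when it is present, as in this sketch. It uses the Vercel AI SDK's `streamText` and the OpenAI provider purely as an example call site; the app's actual code may differ:

```ts
// Sketch: forward TEMPERATURE only when it is set, so models that reject
// the parameter keep their default sampling behavior.
import { openai } from "@ai-sdk/openai"
import { streamText } from "ai"

const temperature = process.env.TEMPERATURE
    ? Number(process.env.TEMPERATURE) // "0" is truthy as a string, so 0 works
    : undefined

const result = streamText({
    model: openai("gpt-4o"),
    prompt: "Draw a simple flowchart",
    // When temperature is undefined, the key is omitted entirely.
    ...(temperature !== undefined ? { temperature } : {}),
})
```
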
## Recommendations

- **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
- **Budget-friendly**: DeepSeek offers competitive pricing
- **Privacy**: Use Ollama for fully local, offline operation (requires powerful hardware)
- **Flexibility**: OpenRouter provides access to many models through a single API

docs/en/cloudflare-deploy.md (new file, 267 lines)
@@ -0,0 +1,267 @@
# Deploy on Cloudflare Workers

This project can be deployed as a **Cloudflare Worker** using the **OpenNext adapter**, giving you:

- Global edge deployment
- Very low latency
- Free `workers.dev` hosting
- Full Next.js ISR support via R2 (optional)

> **Important Windows Note:** OpenNext and Wrangler are **not fully reliable on native Windows**. Recommended options:
>
> - Use **GitHub Codespaces** (works perfectly)
> - OR use **WSL (Linux)**
>
> Pure Windows builds may fail due to WASM file path issues.

---

## Prerequisites

1. A **Cloudflare account** (the free tier works for basic deployment)
2. **Node.js 18+**
3. The **Wrangler CLI** installed (a dev dependency is fine):

    ```bash
    npm install -D wrangler
    ```

4. A Cloudflare login:

    ```bash
    npx wrangler login
    ```

> **Note:** A payment method is only required if you want to enable R2 for ISR caching. Basic Workers deployment is free.

---

## Step 1 — Install dependencies

```bash
npm install
```

---

## Step 2 — Configure environment variables

Cloudflare uses a different env file for local testing.

### 1) Create `.dev.vars` (for Cloudflare local + deploy)

```bash
cp env.example .dev.vars
```

Fill in your API keys and configuration.

### 2) Make sure `.env.local` also exists (for regular Next.js dev)

```bash
cp env.example .env.local
```

Fill in the same values there.

---

## Step 3 — Choose your deployment type

### Option A: Deploy WITHOUT R2 (Simple, Free)

If you don't need ISR caching, you can deploy without R2:

**1. Use a simple `open-next.config.ts`:**

```ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare/config"

export default defineCloudflareConfig({})
```

**2. Use a simple `wrangler.jsonc` (without `r2_buckets`):**

```jsonc
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "main": ".open-next/worker.js",
  "name": "next-ai-draw-io-worker",
  "compatibility_date": "2025-12-08",
  "compatibility_flags": ["nodejs_compat", "global_fetch_strictly_public"],
  "assets": {
    "directory": ".open-next/assets",
    "binding": "ASSETS"
  },
  "services": [
    {
      "binding": "WORKER_SELF_REFERENCE",
      "service": "next-ai-draw-io-worker"
    }
  ]
}
```

Skip to **Step 4**.

---

### Option B: Deploy WITH R2 (Full ISR Support)

R2 enables **Incremental Static Regeneration (ISR)** caching. It requires a payment method on your Cloudflare account.

**1. Create an R2 bucket** in the Cloudflare Dashboard:

- Go to **Storage & Databases → R2**
- Click **Create bucket**
- Name it: `next-inc-cache`

**2. Configure `open-next.config.ts`:**

```ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare/config"
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache"

export default defineCloudflareConfig({
  incrementalCache: r2IncrementalCache,
})
```

**3. Configure `wrangler.jsonc` (with R2):**

```jsonc
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "main": ".open-next/worker.js",
  "name": "next-ai-draw-io-worker",
  "compatibility_date": "2025-12-08",
  "compatibility_flags": ["nodejs_compat", "global_fetch_strictly_public"],
  "assets": {
    "directory": ".open-next/assets",
    "binding": "ASSETS"
  },
  "r2_buckets": [
    {
      "binding": "NEXT_INC_CACHE_R2_BUCKET",
      "bucket_name": "next-inc-cache"
    }
  ],
  "services": [
    {
      "binding": "WORKER_SELF_REFERENCE",
      "service": "next-ai-draw-io-worker"
    }
  ]
}
```

> **Important:** The `bucket_name` must exactly match the name you created in the Cloudflare dashboard.

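OpenNext consumes this binding internally, but it helps to know what a binding is: inside the Worker, `NEXT_INC_CACHE_R2_BUCKET` surfaces as an `R2Bucket` object on the environment. Here is a standalone sketch (not this project's code) of how a Worker would read such a binding:

```ts
// Standalone illustration of an R2 binding, using types from
// @cloudflare/workers-types. OpenNext performs the equivalent reads and
// writes internally for its ISR cache.
interface Env {
    NEXT_INC_CACHE_R2_BUCKET: R2Bucket
}

export default {
    async fetch(_req: Request, env: Env): Promise<Response> {
        // Write and read back a value through the binding.
        await env.NEXT_INC_CACHE_R2_BUCKET.put("demo.txt", "cached value")
        const obj = await env.NEXT_INC_CACHE_R2_BUCKET.get("demo.txt")
        return new Response(obj ? await obj.text() : "cache miss")
    },
}
```
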
---

## Step 4 — Register a workers.dev subdomain (first-time only)

Before your first deployment, you need a `workers.dev` subdomain.

**Option 1: Via the Cloudflare Dashboard (Recommended)**

Visit https://dash.cloudflare.com → Workers & Pages → Overview → Set up a subdomain.

**Option 2: During deploy**

When you run `npm run deploy`, Wrangler may prompt:

```
Would you like to register a workers.dev subdomain? (Y/n)
```

Type `Y` and choose a subdomain name.

> **Note:** In CI/CD or other non-interactive environments, the prompt won't appear. Register via the dashboard first.

---

## Step 5 — Deploy to Cloudflare

```bash
npm run deploy
```

What the script does:

- Builds the Next.js app
- Converts it to a Cloudflare Worker via OpenNext
- Uploads static assets
- Publishes the Worker

Your app will be available at:

```
https://<worker-name>.<your-subdomain>.workers.dev
```

---

## Common issues & fixes

### `You need to register a workers.dev subdomain`

**Cause:** No workers.dev subdomain is registered for your account.

**Fix:** Go to https://dash.cloudflare.com → Workers & Pages → Set up a subdomain.

---

### `Please enable R2 through the Cloudflare Dashboard`

**Cause:** R2 is configured in `wrangler.jsonc` but not enabled on your account.

**Fix:** Either enable R2 (requires a payment method) or use Option A (deploy without R2).

---

### `No R2 binding "NEXT_INC_CACHE_R2_BUCKET" found`

**Cause:** `r2_buckets` is missing from `wrangler.jsonc`.

**Fix:** Add the `r2_buckets` section or switch to Option A (without R2).

---

### `Can't set compatibility date in the future`

**Cause:** `compatibility_date` in the Wrangler config is set to a future date.

**Fix:** Change `compatibility_date` to today or an earlier date.

---

### Windows error: `resvg.wasm?module` (ENOENT)

**Cause:** Windows filenames cannot include `?`, but a WASM asset uses `?module` in its filename.

**Fix:** Build/deploy on Linux (WSL, Codespaces, or CI).

---

## Optional: Preview locally

Preview the Worker locally before deploying:

```bash
npm run preview
```

---

## Summary

| Feature | Without R2 | With R2 |
|---------|------------|---------|
| Cost | Free | Requires payment method |
| ISR Caching | No | Yes |
| Static Pages | Yes | Yes |
| API Routes | Yes | Yes |
| Setup Complexity | Simple | Moderate |

Choose **without R2** for testing or simple apps. Choose **with R2** for production apps that need ISR caching.

docs/en/docker.md (new file, 29 lines)
@@ -0,0 +1,29 @@
# Run with Docker

If you just want to run it locally, the best way is to use Docker.

First, install Docker if you haven't already: [Get Docker](https://docs.docker.com/get-docker/)

Then run:

```bash
docker run -d -p 3000:3000 \
  -e AI_PROVIDER=openai \
  -e AI_MODEL=gpt-4o \
  -e OPENAI_API_KEY=your_api_key \
  ghcr.io/dayuanjiang/next-ai-draw-io:latest
```

Or use an env file:

```bash
cp env.example .env
# Edit .env with your configuration
docker run -d -p 3000:3000 --env-file .env ghcr.io/dayuanjiang/next-ai-draw-io:latest
```

Open [http://localhost:3000](http://localhost:3000) in your browser.

Replace the environment variables with your preferred AI provider configuration. See [AI Providers](./ai-providers.md) for the available options.

> **Offline Deployment:** If `embed.diagrams.net` is blocked in your environment, see [Offline Deployment](./offline-deployment.md) for configuration options.

docs/en/offline-deployment.md (new file, 39 lines)
@@ -0,0 +1,39 @@
# Offline Deployment

Deploy Next AI Draw.io offline by self-hosting draw.io to replace `embed.diagrams.net`.

**Note:** `NEXT_PUBLIC_DRAWIO_BASE_URL` is a **build-time** variable. Changing it requires rebuilding the Docker image.

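The build-time behavior comes from Next.js itself: `NEXT_PUBLIC_*` variables are inlined into the client bundle when `next build` runs. A sketch of what client code sees (the fallback value here is an assumption for illustration):

```ts
// Next.js replaces this expression with a string literal at build time,
// so changing the env var later has no effect until the image is rebuilt.
const DRAWIO_BASE_URL =
    process.env.NEXT_PUBLIC_DRAWIO_BASE_URL ?? "https://embed.diagrams.net"

// e.g. used as the src of the embedded draw.io iframe
console.log(`Embedding draw.io from ${DRAWIO_BASE_URL}`)
```
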
## Docker Compose Setup

1. Clone the repository and define your API keys in `.env`.
2. Create `docker-compose.yml`:

```yaml
services:
  drawio:
    image: jgraph/drawio:latest
    ports: ["8080:8080"]
  next-ai-draw-io:
    build:
      context: .
      args:
        - NEXT_PUBLIC_DRAWIO_BASE_URL=http://localhost:8080
    ports: ["3000:3000"]
    env_file: .env
    depends_on: [drawio]
```

3. Run `docker compose up -d` and open `http://localhost:3000`.

## Configuration & Critical Warning

**`NEXT_PUBLIC_DRAWIO_BASE_URL` must be accessible from the user's browser.**

| Scenario | URL Value |
|----------|-----------|
| Localhost | `http://localhost:8080` |
| Remote/Server | `http://YOUR_SERVER_IP:8080` |

**Do NOT use** internal Docker aliases like `http://drawio:8080`; the browser cannot resolve them.