mirror of
https://github.com/DayuanJiang/next-ai-draw-io.git
synced 2026-01-02 14:22:28 +08:00
docs: add AI provider configuration guide (#100)
- Add docs/ai-providers.md with detailed setup instructions for all providers
- Update README.md, README_CN.md, README_JA.md with provider guide links
- Add model capability requirements note
- Simplify provider list in READMEs

Closes #79
10
README.md
@@ -81,7 +81,7 @@ Diagrams are represented as XML that can be rendered in draw.io. The AI processe

## Multi-Provider Support

- AWS Bedrock (default)
- OpenAI / OpenAI-compatible APIs (via `OPENAI_BASE_URL`)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI

@@ -89,6 +89,12 @@ Diagrams are represented as XML that can be rendered in draw.io. The AI processe

- OpenRouter
- DeepSeek

All providers except AWS Bedrock and OpenRouter support custom endpoints.

📖 **[Detailed Provider Configuration Guide](./docs/ai-providers.md)** - See setup instructions for each provider.

**Model Requirements**: This task requires strong model capabilities for generating long-form text with strict formatting constraints (draw.io XML). Recommended models include Claude Sonnet 4.5, GPT-4o, Gemini 2.0, and DeepSeek V3/R1.

Note that `claude-sonnet-4-5` was trained on draw.io diagrams with AWS logos, so it is the best choice if you want to create AWS architecture diagrams.

## Getting Started

@@ -144,7 +150,7 @@ Edit `.env.local` and configure your chosen provider:

- Set `AI_MODEL` to the specific model you want to use
- Add the required API keys for your provider

See the [Multi-Provider Support](#multi-provider-support) section above for provider-specific configuration examples.
See the [Provider Configuration Guide](./docs/ai-providers.md) for detailed setup instructions for each provider.

4. Run the development server:
10
README_CN.md
@@ -81,7 +81,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979

## 多提供商支持

- AWS Bedrock(默认)
- OpenAI / OpenAI兼容API(通过 `OPENAI_BASE_URL`)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI

@@ -89,6 +89,12 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979

- OpenRouter
- DeepSeek

除AWS Bedrock和OpenRouter外,所有提供商都支持自定义端点。

📖 **[详细的提供商配置指南](./docs/ai-providers.md)** - 查看各提供商的设置说明。

**模型要求**:此任务需要强大的模型能力,因为它涉及生成具有严格格式约束的长文本(draw.io XML)。推荐使用Claude Sonnet 4.5、GPT-4o、Gemini 2.0和DeepSeek V3/R1。

注意:`claude-sonnet-4-5` 已在带有AWS标志的draw.io图表上进行训练,因此如果您想创建AWS架构图,这是最佳选择。

## 快速开始

@@ -144,7 +150,7 @@ cp env.example .env.local

- 将 `AI_MODEL` 设置为您要使用的特定模型
- 添加您的提供商所需的API密钥

请参阅上面的[多提供商支持](#多提供商支持)部分了解特定提供商的配置示例。
详细设置说明请参阅[提供商配置指南](./docs/ai-providers.md)。

4. 运行开发服务器:
10
README_JA.md
@@ -81,7 +81,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979

## マルチプロバイダーサポート

- AWS Bedrock(デフォルト)
- OpenAI / OpenAI互換API(`OPENAI_BASE_URL`経由)
- OpenAI
- Anthropic
- Google AI
- Azure OpenAI

@@ -89,6 +89,12 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979

- OpenRouter
- DeepSeek

AWS BedrockとOpenRouter以外のすべてのプロバイダーはカスタムエンドポイントをサポートしています。

📖 **[詳細なプロバイダー設定ガイド](./docs/ai-providers.md)** - 各プロバイダーの設定手順をご覧ください。

**モデル要件**:このタスクは厳密なフォーマット制約(draw.io XML)を持つ長文テキスト生成を伴うため、強力なモデル機能が必要です。Claude Sonnet 4.5、GPT-4o、Gemini 2.0、DeepSeek V3/R1を推奨します。

注:`claude-sonnet-4-5`はAWSロゴ付きのdraw.ioダイアグラムで学習されているため、AWSアーキテクチャダイアグラムを作成したい場合は最適な選択です。

## はじめに

@@ -144,7 +150,7 @@ cp env.example .env.local

- `AI_MODEL`を使用する特定のモデルに設定
- プロバイダーに必要なAPIキーを追加

プロバイダー固有の設定例については、上記の[マルチプロバイダーサポート](#マルチプロバイダーサポート)セクションを参照してください。
詳細な設定手順については[プロバイダー設定ガイド](./docs/ai-providers.md)を参照してください。

4. 開発サーバーを起動:
141
docs/ai-providers.md
Normal file
@@ -0,0 +1,141 @@

# AI Provider Configuration

This guide explains how to configure different AI model providers for next-ai-draw-io.

## Quick Start

1. Copy `.env.example` to `.env.local`
2. Set your API key for your chosen provider
3. Set `AI_MODEL` to your desired model
4. Run `npm run dev`
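As a concrete sketch of these steps, using the Google Gemini values as the example (the key is a placeholder; the `touch` fallback only makes the snippet runnable when the template file is absent):

```shell
# Quick-start sketch (placeholder values, not real credentials).
# Step 1: start from the template env file, or create an empty one.
cp .env.example .env.local 2>/dev/null || touch .env.local
# Steps 2-3: set the API key and model for your chosen provider (Gemini here).
cat >> .env.local <<'EOF'
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
EOF
# Step 4 would then be: npm run dev
```

Any other provider works the same way; only the variable names change, as shown in the sections below.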
## Supported Providers

### Google Gemini

```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
```

Optional custom endpoint:

```bash
GOOGLE_BASE_URL=https://your-custom-endpoint
```

### OpenAI

```bash
OPENAI_API_KEY=your_api_key
AI_MODEL=gpt-4o
```

Optional custom endpoint (for OpenAI-compatible services):

```bash
OPENAI_BASE_URL=https://your-custom-endpoint/v1
```

### Anthropic

```bash
ANTHROPIC_API_KEY=your_api_key
AI_MODEL=claude-sonnet-4-5-20250514
```

Optional custom endpoint:

```bash
ANTHROPIC_BASE_URL=https://your-custom-endpoint
```

### DeepSeek

```bash
DEEPSEEK_API_KEY=your_api_key
AI_MODEL=deepseek-chat
```

Optional custom endpoint:

```bash
DEEPSEEK_BASE_URL=https://your-custom-endpoint
```

### Azure OpenAI

```bash
AZURE_API_KEY=your_api_key
AI_MODEL=your-deployment-name
```

Optional custom endpoint:

```bash
AZURE_BASE_URL=https://your-resource.openai.azure.com
```

### AWS Bedrock

```bash
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250514-v1:0
```

Note: On AWS (Amplify, Lambda, EC2 with an IAM role), credentials are obtained automatically from the IAM role.

### OpenRouter

```bash
OPENROUTER_API_KEY=your_api_key
AI_MODEL=anthropic/claude-sonnet-4
```

Optional custom endpoint:

```bash
OPENROUTER_BASE_URL=https://your-custom-endpoint
```

### Ollama (Local)

```bash
AI_PROVIDER=ollama
AI_MODEL=llama3.2
```

Optional custom URL:

```bash
OLLAMA_BASE_URL=http://localhost:11434
```
## Auto-Detection

If you configure only **one** provider's API key, the system automatically detects and uses that provider; there is no need to set `AI_PROVIDER`.

If you configure **multiple** API keys, you must set `AI_PROVIDER` explicitly:

```bash
AI_PROVIDER=google # or: openai, anthropic, deepseek, azure, bedrock, openrouter, ollama
```
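The detection rule can be sketched as a shell function (a hypothetical illustration only; the app implements this logic internally, and the actual code may differ):

```shell
# Hypothetical sketch of provider auto-detection (not the project's real code).
resolve_provider() {
  # An explicit AI_PROVIDER always wins.
  if [ -n "$AI_PROVIDER" ]; then echo "$AI_PROVIDER"; return 0; fi
  # Collect every provider whose API key is present in the environment.
  found=""
  [ -n "$GOOGLE_GENERATIVE_AI_API_KEY" ] && found="$found google"
  [ -n "$OPENAI_API_KEY" ] && found="$found openai"
  [ -n "$ANTHROPIC_API_KEY" ] && found="$found anthropic"
  [ -n "$DEEPSEEK_API_KEY" ] && found="$found deepseek"
  [ -n "$AZURE_API_KEY" ] && found="$found azure"
  [ -n "$AWS_ACCESS_KEY_ID" ] && found="$found bedrock"
  [ -n "$OPENROUTER_API_KEY" ] && found="$found openrouter"
  # Exactly one key configured: use that provider. Otherwise, fail.
  set -- $found
  if [ "$#" -eq 1 ]; then echo "$1"; return 0; fi
  echo "error: $# provider keys found; set AI_PROVIDER explicitly" >&2
  return 1
}
```

This is why a single-key setup needs no `AI_PROVIDER`, while a multi-key setup does.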

## Model Capability Requirements

This task requires exceptionally strong model capabilities, as it involves generating long-form text with strict formatting constraints (draw.io XML).

**Recommended models**:

- Claude Sonnet 4.5 / Opus 4.5
- GPT-4o
- Gemini 2.0
- DeepSeek V3 / R1

**Note on Ollama**: While Ollama is supported as a provider, it is generally not practical for this use case unless you run high-capability models such as DeepSeek R1 or Qwen3-235B locally.

## Recommendations

- **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
- **Budget-friendly**: DeepSeek offers competitive pricing
- **Privacy**: Use Ollama for fully local, offline operation (requires powerful hardware)
- **Flexibility**: OpenRouter provides access to many models through a single API