# AI Provider Configuration

This guide explains how to configure different AI model providers for next-ai-draw-io.

## Quick Start

1. Copy `.env.example` to `.env.local`
2. Set the API key for your chosen provider
3. Set `AI_MODEL` to your desired model
4. Run `npm run dev`

## Supported Providers

### Google Gemini

```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
```

Optional custom endpoint:

```bash
GOOGLE_BASE_URL=https://your-custom-endpoint
```
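If you are wiring a provider up yourself, the variables above map onto the AI SDK's Google provider roughly as follows. This is a minimal sketch, not the app's actual code, and the fallback model id is only illustrative:

```typescript
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { generateText } from "ai";

// Build the provider from the env vars documented above. baseURL is
// only passed when GOOGLE_BASE_URL is set, so the SDK's default
// endpoint is used otherwise.
const google = createGoogleGenerativeAI({
    apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
    ...(process.env.GOOGLE_BASE_URL && { baseURL: process.env.GOOGLE_BASE_URL }),
});

const { text } = await generateText({
    model: google(process.env.AI_MODEL ?? "gemini-2.0-flash"),
    prompt: "Draw a two-node flowchart",
});
```

The same constructor-plus-optional-`baseURL` pattern applies to the other single-key providers below (OpenAI via `@ai-sdk/openai`, Anthropic via `@ai-sdk/anthropic`, DeepSeek via `@ai-sdk/deepseek`).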
### OpenAI

```bash
OPENAI_API_KEY=your_api_key
AI_MODEL=gpt-4o
```

Optional custom endpoint (for OpenAI-compatible services):

```bash
OPENAI_BASE_URL=https://your-custom-endpoint/v1
```
### Anthropic

```bash
ANTHROPIC_API_KEY=your_api_key
AI_MODEL=claude-sonnet-4-5-20250929
```

Optional custom endpoint:

```bash
ANTHROPIC_BASE_URL=https://your-custom-endpoint
```
### DeepSeek

```bash
DEEPSEEK_API_KEY=your_api_key
AI_MODEL=deepseek-chat
```

Optional custom endpoint:

```bash
DEEPSEEK_BASE_URL=https://your-custom-endpoint
```
### SiliconFlow (OpenAI-compatible)

```bash
SILICONFLOW_API_KEY=your_api_key
AI_MODEL=deepseek-ai/DeepSeek-V3 # example; use any SiliconFlow model id
```

Optional custom endpoint (defaults to the recommended domain):

```bash
SILICONFLOW_BASE_URL=https://api.siliconflow.com/v1 # or https://api.siliconflow.cn/v1
```
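Because SiliconFlow speaks the OpenAI protocol, one plausible wiring (an assumption about the implementation, not a quote from it) reuses the AI SDK's OpenAI provider pointed at the SiliconFlow domain:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// SiliconFlow exposes an OpenAI-compatible API, so the standard
// OpenAI provider works once its baseURL is redirected.
const siliconflow = createOpenAI({
    apiKey: process.env.SILICONFLOW_API_KEY,
    baseURL: process.env.SILICONFLOW_BASE_URL ?? "https://api.siliconflow.com/v1",
});

// Model ids keep SiliconFlow's own naming, e.g. "deepseek-ai/DeepSeek-V3".
const model = siliconflow(process.env.AI_MODEL ?? "deepseek-ai/DeepSeek-V3");
```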
### Azure OpenAI

```bash
AZURE_API_KEY=your_api_key
AZURE_RESOURCE_NAME=your-resource-name # Required unless AZURE_BASE_URL is set
AI_MODEL=your-deployment-name
```

Or use a custom endpoint instead of the resource name:

```bash
AZURE_API_KEY=your_api_key
AZURE_BASE_URL=https://your-resource.openai.azure.com # Alternative to AZURE_RESOURCE_NAME
AI_MODEL=your-deployment-name
```

Optional reasoning configuration:

```bash
AZURE_REASONING_EFFORT=low # Optional: low, medium, high
AZURE_REASONING_SUMMARY=detailed # Optional: none, brief, detailed
```
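With the AI SDK's Azure provider, `AZURE_RESOURCE_NAME` and `AZURE_BASE_URL` are two alternative ways of locating the endpoint; a sketch of that choice is below. How the reasoning variables are forwarded is this app's own convention, so they are left out here:

```typescript
import { createAzure } from "@ai-sdk/azure";

// Either resourceName or baseURL identifies the Azure endpoint; the SDK
// derives the full URL from resourceName when no baseURL is given.
const azure = createAzure({
    apiKey: process.env.AZURE_API_KEY,
    ...(process.env.AZURE_BASE_URL
        ? { baseURL: process.env.AZURE_BASE_URL }
        : { resourceName: process.env.AZURE_RESOURCE_NAME }),
});

// AI_MODEL is the *deployment* name here, not an underlying model id.
const model = azure(process.env.AI_MODEL ?? "my-deployment");
```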
### AWS Bedrock

```bash
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250929-v1:0
```

Note: When running on AWS (Lambda, or EC2 with an IAM role), credentials are obtained automatically from the IAM role.
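A sketch of the credential handling with `@ai-sdk/amazon-bedrock` (illustrative, not the app's exact code): explicit keys come from the env vars above, and per the note, a role can supply them on AWS instead:

```typescript
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";

// Explicit credentials from the documented env vars. When running on
// AWS with an IAM role, these can be omitted (newer SDK versions also
// accept a credentialProvider option for the default AWS chain).
const bedrock = createAmazonBedrock({
    region: process.env.AWS_REGION ?? "us-west-2",
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

const model = bedrock(
    process.env.AI_MODEL ?? "anthropic.claude-sonnet-4-5-20250929-v1:0"
);
```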
### OpenRouter

```bash
OPENROUTER_API_KEY=your_api_key
AI_MODEL=anthropic/claude-sonnet-4
```

Optional custom endpoint:

```bash
OPENROUTER_BASE_URL=https://your-custom-endpoint
```
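OpenRouter ships a community provider for the AI SDK; a minimal sketch, assuming the `@openrouter/ai-sdk-provider` package:

```typescript
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

// One OpenRouter key fronts many upstream models; ids keep the
// provider/model form, e.g. "anthropic/claude-sonnet-4".
const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY,
    ...(process.env.OPENROUTER_BASE_URL && {
        baseURL: process.env.OPENROUTER_BASE_URL,
    }),
});

const model = openrouter(process.env.AI_MODEL ?? "anthropic/claude-sonnet-4");
```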
### Ollama (Local)

```bash
AI_PROVIDER=ollama
AI_MODEL=llama3.2
```

Optional custom URL:

```bash
OLLAMA_BASE_URL=http://localhost:11434
```
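For Ollama, a sketch using the community `ollama-ai-provider` package. Whether the app appends the `/api` path segment is an assumption; that package's own default is `http://localhost:11434/api`:

```typescript
import { createOllama } from "ollama-ai-provider";

// Point the provider at a local Ollama server; without a custom URL the
// package default (http://localhost:11434/api) applies.
const ollama = createOllama({
    ...(process.env.OLLAMA_BASE_URL && {
        baseURL: `${process.env.OLLAMA_BASE_URL}/api`,
    }),
});

const model = ollama(process.env.AI_MODEL ?? "llama3.2");
```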
### Vercel AI Gateway

Vercel AI Gateway provides unified access to multiple AI providers through a single API key. This simplifies authentication and allows you to switch between providers without managing multiple API keys.

**Basic Usage (Vercel-hosted Gateway):**

```bash
AI_GATEWAY_API_KEY=your_gateway_api_key
AI_MODEL=openai/gpt-4o
```

**Custom Gateway URL (for local development or self-hosted Gateway):**

```bash
AI_GATEWAY_API_KEY=your_custom_api_key
AI_GATEWAY_BASE_URL=https://your-custom-gateway.com/v1/ai
AI_MODEL=openai/gpt-4o
```

Model format uses `provider/model` syntax:

- `openai/gpt-4o` - OpenAI GPT-4o
- `anthropic/claude-sonnet-4-5` - Anthropic Claude Sonnet 4.5
- `google/gemini-2.0-flash` - Google Gemini 2.0 Flash

**Configuration notes:**

- If `AI_GATEWAY_BASE_URL` is not set, the default Vercel Gateway URL (`https://ai-gateway.vercel.sh/v1/ai`) is used
- A custom base URL is useful for:
    - Local development with a custom Gateway instance
    - Self-hosted AI Gateway deployments
    - Enterprise proxy configurations
- When using a custom base URL, you must also provide `AI_GATEWAY_API_KEY`

Get your API key from the [Vercel AI Gateway dashboard](https://vercel.com/ai-gateway).
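These settings correspond to the AI SDK's `createGateway`; a sketch of the fallback behavior described in the notes above (not the app's exact code):

```typescript
import { createGateway } from "@ai-sdk/gateway";

// Fall back to the Vercel-hosted Gateway when no custom URL is set.
const gateway = createGateway({
    apiKey: process.env.AI_GATEWAY_API_KEY,
    baseURL:
        process.env.AI_GATEWAY_BASE_URL ?? "https://ai-gateway.vercel.sh/v1/ai",
});

// Gateway model ids use the provider/model syntax shown above.
const model = gateway(process.env.AI_MODEL ?? "openai/gpt-4o");
```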
## Auto-Detection

If you only configure **one** provider's API key, the system will automatically detect and use that provider. No need to set `AI_PROVIDER`.

If you configure **multiple** API keys, you must explicitly set `AI_PROVIDER`:

```bash
AI_PROVIDER=google # or: openai, anthropic, deepseek, siliconflow, azure, bedrock, openrouter, ollama, gateway
```
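The detection rule can be pictured like this (a simplified sketch of the behavior described above, not the project's actual source; Bedrock and Ollama are omitted because they are not keyed by a single variable):

```typescript
// Map each provider to the env var that activates it.
const PROVIDER_KEYS: Record<string, string | undefined> = {
    google: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    deepseek: process.env.DEEPSEEK_API_KEY,
    siliconflow: process.env.SILICONFLOW_API_KEY,
    azure: process.env.AZURE_API_KEY,
    openrouter: process.env.OPENROUTER_API_KEY,
    gateway: process.env.AI_GATEWAY_API_KEY,
};

function resolveProvider(): string {
    // An explicit choice always wins.
    if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER;
    const configured = Object.entries(PROVIDER_KEYS)
        .filter(([, key]) => Boolean(key))
        .map(([name]) => name);
    // Exactly one configured key: unambiguous, use it.
    if (configured.length === 1) return configured[0];
    throw new Error("Set AI_PROVIDER: zero or several API keys are configured.");
}
```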
## Model Capability Requirements

Generating diagrams is demanding: the model must produce long-form output under the strict formatting constraints of draw.io XML, so it requires exceptionally strong capabilities.

**Recommended models**:

- Claude Sonnet 4.5 / Opus 4.5

**Note on Ollama**: While Ollama is supported as a provider, it is generally not practical for this use case unless you run high-capability models such as DeepSeek R1 or Qwen3-235B locally.
## Temperature Setting

You can optionally configure the temperature via environment variable:

```bash
TEMPERATURE=0 # More deterministic output (recommended for diagrams)
```

**Important**: Leave `TEMPERATURE` unset for models that don't support temperature settings, such as:

- GPT-5.1 and other reasoning models
- Some specialized models

When unset, the model uses its default behavior.
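One way to honor "unset means default" is to forward `temperature` only when the variable exists; a sketch (not the app's exact code):

```typescript
import { generateText, type LanguageModel } from "ai";

async function generateDiagram(model: LanguageModel, prompt: string) {
    // Only forward temperature when TEMPERATURE is set; otherwise the
    // field is omitted entirely and the model keeps its default.
    const temperature =
        process.env.TEMPERATURE !== undefined
            ? Number(process.env.TEMPERATURE)
            : undefined;

    return generateText({
        model,
        prompt,
        ...(temperature !== undefined && { temperature }),
    });
}
```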
## Recommendations

- **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
- **Budget-friendly**: DeepSeek offers competitive pricing
- **Privacy**: Use Ollama for fully local, offline operation (requires powerful hardware)
- **Flexibility**: OpenRouter provides access to many models through a single API