# AI Provider Configuration
This guide explains how to configure different AI model providers for next-ai-draw-io.
## Quick Start

- Copy `.env.example` to `.env.local`
- Set your API key for your chosen provider
- Set `AI_MODEL` to your desired model
- Run `npm run dev`
## Supported Providers
### Google Gemini

```
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
AI_MODEL=gemini-2.0-flash
```

Optional custom endpoint:

```
GOOGLE_BASE_URL=https://your-custom-endpoint
```
### OpenAI

```
OPENAI_API_KEY=your_api_key
AI_MODEL=gpt-4o
```

Optional custom endpoint (for OpenAI-compatible services):

```
OPENAI_BASE_URL=https://your-custom-endpoint/v1
```
### Anthropic

```
ANTHROPIC_API_KEY=your_api_key
AI_MODEL=claude-sonnet-4-5-20250929
```

Optional custom endpoint:

```
ANTHROPIC_BASE_URL=https://your-custom-endpoint
```
### DeepSeek

```
DEEPSEEK_API_KEY=your_api_key
AI_MODEL=deepseek-chat
```

Optional custom endpoint:

```
DEEPSEEK_BASE_URL=https://your-custom-endpoint
```
### Azure OpenAI

```
AZURE_API_KEY=your_api_key
AI_MODEL=your-deployment-name
```

Optional custom endpoint:

```
AZURE_BASE_URL=https://your-resource.openai.azure.com
```
### AWS Bedrock

```
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AI_MODEL=anthropic.claude-sonnet-4-5-20250929-v1:0
```

Note: On AWS (Amplify, Lambda, EC2 with an IAM role), credentials are obtained automatically from the IAM role, so the access key variables can be omitted.
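To illustrate why the key variables are optional there, here is a minimal sketch (the option names are assumptions, not the project's actual code) of building a provider config that passes explicit credentials only when both are set:

```ts
// Hypothetical sketch: include explicit credentials only when both env
// vars are present; otherwise leave them out so the AWS SDK's default
// credential provider chain (IAM role, instance profile, etc.) resolves
// them automatically.
const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION } = process.env;

const bedrockConfig = {
  region: AWS_REGION ?? "us-west-2",
  ...(AWS_ACCESS_KEY_ID && AWS_SECRET_ACCESS_KEY
    ? {
        accessKeyId: AWS_ACCESS_KEY_ID,
        secretAccessKey: AWS_SECRET_ACCESS_KEY,
      }
    : {}),
};
```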
### OpenRouter

```
OPENROUTER_API_KEY=your_api_key
AI_MODEL=anthropic/claude-sonnet-4
```

Optional custom endpoint:

```
OPENROUTER_BASE_URL=https://your-custom-endpoint
```
### Ollama (Local)

```
AI_PROVIDER=ollama
AI_MODEL=llama3.2
```

Optional custom URL:

```
OLLAMA_BASE_URL=http://localhost:11434
```
## Auto-Detection
If only one provider's API key is configured, the system automatically detects and uses that provider; there is no need to set `AI_PROVIDER`.

If you configure multiple API keys, you must set `AI_PROVIDER` explicitly:
```
AI_PROVIDER=google  # or: openai, anthropic, deepseek, azure, bedrock, openrouter, ollama
```
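For illustration, a minimal TypeScript sketch of this rule (an assumption about the shape of the logic, not the project's actual route.ts; Bedrock and Ollama, which have no single API key variable, are left out):

```ts
// Hypothetical sketch of the auto-detection rule described above: if
// exactly one provider's API key is configured, use that provider;
// otherwise require an explicit AI_PROVIDER.
const keyToProvider: Record<string, string> = {
  GOOGLE_GENERATIVE_AI_API_KEY: "google",
  OPENAI_API_KEY: "openai",
  ANTHROPIC_API_KEY: "anthropic",
  DEEPSEEK_API_KEY: "deepseek",
  AZURE_API_KEY: "azure",
  OPENROUTER_API_KEY: "openrouter",
};

function detectProvider(): string {
  // An explicit AI_PROVIDER always wins.
  if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER;
  const configured = Object.keys(keyToProvider).filter((k) => process.env[k]);
  if (configured.length === 1) return keyToProvider[configured[0]];
  throw new Error("Zero or multiple API keys found; set AI_PROVIDER explicitly.");
}
```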
## Model Capability Requirements
Generating diagrams requires exceptionally strong model capabilities: the model must produce long-form output under strict formatting constraints (valid draw.io XML).

Recommended models:

- Claude Sonnet 4.5 / Opus 4.5
Note on Ollama: While Ollama is supported as a provider, it's generally not practical for this use case unless you're running high-capability models like DeepSeek R1 or Qwen3-235B locally.
## Temperature Setting
You can optionally configure the temperature via an environment variable:

```
TEMPERATURE=0  # More deterministic output (recommended for diagrams)
```
Important: Leave `TEMPERATURE` unset for models that don't support temperature settings, such as:

- GPT-5.1 and other reasoning models
- Some specialized models
When unset, no temperature parameter is sent and the model uses its default behavior.
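As a rough sketch of what this means in the route handler (illustrative only; everything beyond the `TEMPERATURE` variable itself is an assumption):

```ts
// Hypothetical sketch: forward `temperature` only when TEMPERATURE is set
// and parses as a number; otherwise omit the parameter entirely so that
// models without temperature support (e.g. reasoning models) still work.
const raw = process.env.TEMPERATURE;
const temperature =
  raw !== undefined && raw !== "" && !Number.isNaN(Number(raw))
    ? Number(raw)
    : undefined;

const callOptions = {
  model: process.env.AI_MODEL,
  // Spreading a conditional object leaves the key out when unset,
  // rather than sending `temperature: undefined`.
  ...(temperature !== undefined ? { temperature } : {}),
};
```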
## Recommendations
- **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
- **Budget-friendly**: DeepSeek offers competitive pricing
- **Privacy**: Use Ollama for fully local, offline operation (requires powerful hardware)
- **Flexibility**: OpenRouter provides access to many models through a single API