diff --git a/README.md b/README.md
index dc95e78..d21b7b7 100644
--- a/README.md
+++ b/README.md
@@ -149,6 +149,7 @@ Edit `.env.local` and configure your chosen provider:
 - Set `AI_PROVIDER` to your chosen provider (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek)
 - Set `AI_MODEL` to the specific model you want to use
 - Add the required API keys for your provider
+- `TEMPERATURE`: Optional temperature setting (e.g., `0` for deterministic output). Leave unset for models that don't support it (e.g., reasoning models).
 - `ACCESS_CODE_LIST`: Optional access password(s), can be comma-separated for multiple passwords.
 
 > Warning: If you do not set `ACCESS_CODE_LIST`, anyone can access your deployed site directly, which may lead to rapid depletion of your token. It is recommended to set this option.
diff --git a/README_CN.md b/README_CN.md
index 5ba3033..d3c7091 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -149,6 +149,7 @@ cp env.example .env.local
 - 将 `AI_PROVIDER` 设置为您选择的提供商(bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek)
 - 将 `AI_MODEL` 设置为您要使用的特定模型
 - 添加您的提供商所需的API密钥
+- `TEMPERATURE`:可选的温度设置(例如 `0` 表示确定性输出)。对于不支持此参数的模型(如推理模型),请不要设置。
 - `ACCESS_CODE_LIST` 访问密码,可选,可以使用逗号隔开多个密码。
 
 > 警告:如果不填写 `ACCESS_CODE_LIST`,则任何人都可以直接使用你部署后的网站,可能会导致你的 token 被急速消耗完毕,建议填写此选项。
diff --git a/README_JA.md b/README_JA.md
index a80dd6f..fcb82a3 100644
--- a/README_JA.md
+++ b/README_JA.md
@@ -149,6 +149,7 @@ cp env.example .env.local
 - `AI_PROVIDER`を選択したプロバイダーに設定(bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek)
 - `AI_MODEL`を使用する特定のモデルに設定
 - プロバイダーに必要なAPIキーを追加
+- `TEMPERATURE`:オプションの温度設定(例:`0`で決定論的な出力)。温度をサポートしないモデル(推論モデルなど)では設定しないでください。
 - `ACCESS_CODE_LIST` アクセスパスワード(オプション)。カンマ区切りで複数のパスワードを指定できます。
 
 > 警告:`ACCESS_CODE_LIST`を設定しない場合、誰でもデプロイされたサイトに直接アクセスできるため、トークンが急速に消費される可能性があります。このオプションを設定することをお勧めします。
diff --git a/app/api/chat/route.ts b/app/api/chat/route.ts
index 9fa01d5..42be311 100644
--- a/app/api/chat/route.ts
+++ b/app/api/chat/route.ts
@@ -418,7 +418,9 @@ IMPORTANT: Keep edits concise:
         }),
       },
     },
-    temperature: 0,
+    ...(process.env.TEMPERATURE !== undefined && {
+      temperature: parseFloat(process.env.TEMPERATURE),
+    }),
   })
 
   return result.toUIMessageStreamResponse()
diff --git a/docs/ai-providers.md b/docs/ai-providers.md
index aabae67..0b2157a 100644
--- a/docs/ai-providers.md
+++ b/docs/ai-providers.md
@@ -133,6 +133,20 @@ This task requires exceptionally strong model capabilities, as it involves gener
 
 **Note on Ollama**: While Ollama is supported as a provider, it's generally not practical for this use case unless you're running high-capability models like DeepSeek R1 or Qwen3-235B locally.
 
+## Temperature Setting
+
+You can optionally configure the temperature via environment variable:
+
+```bash
+TEMPERATURE=0 # More deterministic output (recommended for diagrams)
+```
+
+**Important**: Leave `TEMPERATURE` unset for models that don't support temperature settings, such as:
+- GPT-5.1 and other reasoning models
+- Some specialized models
+
+When unset, the model uses its default behavior.
+
 ## Recommendations
 
 - **Best experience**: Use models with vision support (GPT-4o, Claude, Gemini) for image-to-diagram features
diff --git a/env.example b/env.example
index 7514843..c782158 100644
--- a/env.example
+++ b/env.example
@@ -48,5 +48,10 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
 # LANGFUSE_SECRET_KEY=sk-lf-...
 # LANGFUSE_BASEURL=https://cloud.langfuse.com # EU region, use https://us.cloud.langfuse.com for US
 
+# Temperature (Optional)
+# Controls randomness in AI responses. Lower = more deterministic.
+# Leave unset for models that don't support temperature (e.g., GPT-5.1 reasoning models)
+# TEMPERATURE=0
+
 # Access Control (Optional)
 # ACCESS_CODE_LIST=your-secret-code,another-code
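
Note on the `app/api/chat/route.ts` hunk: the conditional spread is what keeps `temperature` out of the request object entirely when the env var is unset, rather than sending `temperature: undefined`. Below is a minimal standalone sketch of that pattern; the `buildOptions` helper and the `model` field are hypothetical, for illustration only, and are not part of the project's code.

```ts
// Sketch of the conditional-spread pattern from the route.ts hunk.
// Assumption: illustrative only, not the project's actual streamText call.
function buildOptions(): { model: string; temperature?: number } {
  return {
    model: "example-model", // hypothetical placeholder
    // Spreading `false` is a no-op, so the `temperature` key is omitted
    // entirely when TEMPERATURE is unset; providers that reject the
    // parameter (e.g., reasoning models) never receive it.
    ...(process.env.TEMPERATURE !== undefined && {
      temperature: parseFloat(process.env.TEMPERATURE),
    }),
  }
}

console.log(buildOptions()) // { model: "example-model" } when TEMPERATURE is unset
```

One caveat worth flagging: `parseFloat` returns `NaN` for a non-numeric value, so a malformed `TEMPERATURE` would be forwarded as `temperature: NaN`. Validating the parsed value (e.g., with `Number.isFinite`) before spreading would be a reasonable hardening step.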