Some models (Kimi K2, DeepSeek, Qwen text models) don't support image/vision
input. The AI SDK silently drops unsupported image parts, causing confusing
responses where the model acts as if no image was uploaded.
Added a supportsImageInput() function that detects unsupported models by
name and returns a 400 error with clear guidance when users try to upload
images to these models.
Closes #469
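A minimal sketch of the check (the pattern list and the helper around it are illustrative, not the shipped code):
```ts
// Name-based heuristic; the pattern list here is an assumption, not the shipped one.
const TEXT_ONLY_MODEL_PATTERNS = [/kimi[-_]?k2/i, /deepseek/i, /qwen(?!.*vl)/i]

export function supportsImageInput(modelId: string): boolean {
    return !TEXT_ONLY_MODEL_PATTERNS.some((pattern) => pattern.test(modelId))
}

// Guard in the chat route: reject up front instead of letting the SDK
// silently drop the image parts.
export function rejectUnsupportedImage(modelId: string, hasImageParts: boolean): Response | null {
    if (hasImageParts && !supportsImageInput(modelId)) {
        return Response.json(
            { error: `${modelId} does not accept image input. Remove the image or switch to a vision-capable model.` },
            { status: 400 },
        )
    }
    return null
}
```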
* feat: add doubao provider and ByteDance sponsorship
- Add doubao provider using DeepSeek SDK with Volcengine base URL
- Add ByteDance Doubao sponsorship acknowledgment to about pages
- Update all README files (EN/CN/JA) with K2-thinking model info
- Update ai-providers.md with doubao configuration
- Keep both gateway and doubao providers after merge
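A sketch of how the Volcengine-backed doubao provider above might be wired (env var names and the Ark base URL are assumptions, not copied from the change):
```ts
import { createDeepSeek } from "@ai-sdk/deepseek"

// Doubao exposes an OpenAI/DeepSeek-compatible API on Volcengine Ark, so the
// DeepSeek SDK is reused with a different base URL.
export const doubao = createDeepSeek({
    apiKey: process.env.DOUBAO_API_KEY,
    baseURL: process.env.DOUBAO_BASE_URL ?? "https://ark.cn-beijing.volces.com/api/v3",
})

// Usage: streamText({ model: doubao("<ark-model-or-endpoint-id>"), ... })
```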
* style: auto-format with Biome
* feat: add doubao and sglang to provider config panel
* fix: add doubao and sglang to validate-model API and logo maps
* docs: update ByteDance sponsorship note in all README versions
* docs: add Doubao logo to sponsorship note
* fix: use raw GitHub URL for Doubao logo in READMEs
* fix: separate link and image in sponsorship note
* fix: use PNG instead of SVG for Doubao logo
* fix: use current branch for PNG URL (will update to main after merge)
* docs: reorganize Deployment section and update image URLs to main
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Third-party OpenAI-compatible proxies typically don't support the
/responses endpoint. Use .chat() for custom baseURLs while keeping
Responses API for official OpenAI to preserve reasoning model support.
Fixes #377
- Use Responses API instead of Chat Completions API for OpenAI
(.chat() -> default call) to support reasoning events
- Add o4 to reasoning model detection
- Change default reasoningSummary from 'detailed' to 'auto'
(not all models support 'detailed')
- Update types to match AI SDK: 'auto' | 'detailed'
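A sketch of the resulting selection logic (helper and variable names are illustrative):
```ts
import { createOpenAI } from "@ai-sdk/openai"

const customBaseURL = process.env.OPENAI_BASE_URL // undefined => official OpenAI

const openai = createOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    baseURL: customBaseURL,
})

// o1 / o3 / o4 families; the exact pattern is illustrative.
const isReasoningModel = (modelId: string) => /^o[134]/.test(modelId)

export function getOpenAIModel(modelId: string) {
    // Third-party OpenAI-compatible proxies usually lack /responses, so fall
    // back to Chat Completions when a custom base URL is configured.
    const model = customBaseURL ? openai.chat(modelId) : openai(modelId)

    // 'auto' lets each model pick a supported summary level; 'detailed' is
    // rejected by some reasoning models.
    const providerOptions = isReasoningModel(modelId)
        ? { openai: { reasoningSummary: "auto" as const } }
        : undefined

    return { model, providerOptions }
}
```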
* feat: add multi-provider model configuration
- Add model config dialog for managing multiple AI providers
- Support for OpenAI, Anthropic, Google, Azure, Bedrock, OpenRouter, DeepSeek, SiliconFlow, Ollama, and AI Gateway
- Add model selector dropdown in chat panel header
- Add API key validation endpoint
- Add custom model ID input with keyboard navigation
- Fix hover highlight in Command component
- Add suggested models for each provider including latest Claude 4.5 series
- Store configuration locally in browser
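The locally stored shape might look roughly like this (field and key names are assumptions):
```ts
// Hypothetical shape of the browser-side configuration; keys stay in localStorage.
interface ModelProviderConfig {
    provider: "openai" | "anthropic" | "google" | "azure" | "bedrock"
        | "openrouter" | "deepseek" | "siliconflow" | "ollama" | "gateway"
    apiKey?: string
    baseURL?: string
    models: { id: string; validated?: boolean }[]
}

const STORAGE_KEY = "model-config"

export function loadModelConfig(): ModelProviderConfig[] {
    if (typeof window === "undefined") return []
    try {
        return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]")
    } catch {
        return []
    }
}

export function saveModelConfig(config: ModelProviderConfig[]) {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(config))
}
```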
* feat: improve model config UI and move selector to chat input
- Move model selector from header to chat input (left of send button)
- Add per-model validation status (queued, running, valid, invalid)
- Filter model selector to only show verified models
- Add editable model IDs in config dialog
- Add custom model input field alongside suggested models dropdown
- Fix hover states on provider buttons and select triggers
- Update OpenAI suggested models with GPT-5 series
- Add alert-dialog component for delete confirmation
* refactor: revert shadcn component changes, apply hover fix at usage site
* feat: add AWS credentials support for Bedrock provider
- Add AWS Access Key ID, Secret Access Key, Region fields for Bedrock
- Show different credential fields based on provider type
- Update validation API to handle Bedrock with AWS credentials
- Add region selector with common AWS regions
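Passing those fields through to the provider might look like this (a sketch; createAmazonBedrock accepts explicit credentials, the parameter shape here is illustrative):
```ts
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock"

// Credentials arrive from the config dialog (client) or env vars (server).
export function getBedrockProvider(creds: {
    accessKeyId: string
    secretAccessKey: string
    region: string
}) {
    return createAmazonBedrock({
        region: creds.region, // e.g. "us-east-1"
        accessKeyId: creds.accessKeyId,
        secretAccessKey: creds.secretAccessKey,
    })
}

// Usage: getBedrockProvider(creds)("<bedrock-model-id>")
```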
* fix: reset Test button after validation completes
* fix: reset validation button to Test after success
* fix: complete bedrock support and UI/UX improvements
- Add bedrock to ALLOWED_CLIENT_PROVIDERS for client credentials
- Pass AWS credentials through full chain (headers → API → provider)
- Replace non-existent GPT-5 models with real ones (o1, o3-mini)
- Add accessibility: aria-labels, focus-visible rings, inline errors
- Add more AWS regions (Ohio, London, Paris, Mumbai, Seoul, São Paulo)
- Fix setTimeout cleanup with useRef on component unmount
- Fix TypeScript type consistency in getSelectedAIConfig fallback
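The useRef-based timeout cleanup mentioned above is the usual React pattern, roughly (hook name is illustrative):
```ts
import { useEffect, useRef } from "react"

// Keep the pending timeout in a ref so unmount can always clear it,
// e.g. for resetting the Test button label after validation.
export function useResetAfterDelay(delayMs: number) {
    const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null)

    useEffect(() => {
        // Clear any pending timeout when the component unmounts.
        return () => {
            if (timeoutRef.current) clearTimeout(timeoutRef.current)
        }
    }, [])

    return (reset: () => void) => {
        if (timeoutRef.current) clearTimeout(timeoutRef.current)
        timeoutRef.current = setTimeout(reset, delayMs)
    }
}
```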
* chore: remove unused code
- Remove unused setAccessCodeRequired state in chat-panel.tsx
- Remove unused getSelectedModel export in model-config.ts
* fix: UI/UX improvements for model configuration dialog
- Add gradient header styling with icon badge
- Change Configuration section icon from Key to Settings2
- Add duplicate model detection with warning banner and inline removal
- Filter out already-added models from suggestions dropdown
- Add type-to-confirm for deleting providers with 3+ models
- Enhance delete confirmation dialog with warning icon
- Improve model selector discoverability (show model name + chevron)
- Add truncation for long model names with title tooltip
- Remove AI provider settings from Settings dialog (now in Model Config)
- Extract ValidationButton into reusable component
* fix: prevent duplicate model IDs within same provider
- Block adding model if ID already exists in provider
- Block editing model ID to match existing model in provider
* fix: improve duplicate model ID notifications
- Add toast notification when trying to add duplicate model
- Allow free typing when editing model ID, validate on blur
- Show warning toast instead of blocking input
* fix: improve duplicate model validation UX in config dialog
- Add inline error display for duplicate model IDs
- Show red border on input when error exists
- Validate on blur with shake animation for edit errors
- Prevent saving empty model names
- Clear errors when user starts typing
- Simplify error styling (small red text, no heavy chips)
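The duplicate guard reduces to a small pure check run on blur (names are illustrative):
```ts
export function isDuplicateModelId(
    models: { id: string }[],
    candidateId: string,
    editingIndex?: number,
): boolean {
    const normalized = candidateId.trim().toLowerCase()
    return models.some(
        (model, index) => index !== editingIndex && model.id.trim().toLowerCase() === normalized,
    )
}

// On blur: show an inline error (and shake the input) instead of blocking
// keystrokes while the user is still typing.
// if (isDuplicateModelId(provider.models, draftId, editingIndex)) {
//     setError("This model ID already exists for this provider")
// }
```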
* feat: add support for custom AI Gateway base URL
- Add createGateway support with configurable baseURL
- Allow AI_GATEWAY_BASE_URL environment variable for:
* Local development with custom Gateway
* Self-hosted AI Gateway deployments
* Enterprise proxy configurations
- Maintain backward compatibility: defaults to Vercel Gateway when not set
- Update documentation with usage examples and configuration notes
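The fallback might be wired roughly like this (a sketch; createGateway and the default gateway instance come from @ai-sdk/gateway):
```ts
import { createGateway, gateway } from "@ai-sdk/gateway"

// Default Vercel AI Gateway unless AI_GATEWAY_BASE_URL points at a local,
// self-hosted, or enterprise-proxied gateway.
export const gatewayProvider = process.env.AI_GATEWAY_BASE_URL
    ? createGateway({
        apiKey: process.env.AI_GATEWAY_API_KEY,
        baseURL: process.env.AI_GATEWAY_BASE_URL,
    })
    : gateway

// Model IDs use the "creator/model" format, e.g. gatewayProvider("openai/gpt-4o")
```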
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: remove errant character in error message
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
* feat: add Vercel AI Gateway support
- Updated environment configuration to include AI_GATEWAY_API_KEY for unified access to multiple AI providers.
- Added gateway provider to the list of supported AI providers in the codebase.
- Enhanced documentation to explain the usage of Vercel AI Gateway and its model format.
This change simplifies authentication and allows users to switch between providers seamlessly.
* Update package @ai-sdk/gateway to latest version 2.0.21
* fix: improve Azure provider auto-detection and validation (#223)
- Fix detectProvider() to only detect Azure when it has complete config
(both AZURE_API_KEY and AZURE_RESOURCE_NAME or AZURE_BASE_URL)
- Add validation in validateProviderCredentials() for Azure to provide
clear error messages when configuration is incomplete
- Update docs/ai-providers.md to clarify Azure requires resource name
* docs: add Azure reasoning options to documentation
Previously AZURE_RESOURCE_NAME was documented in env.example but not
actually used in the code. This caused Azure OpenAI configuration to fail
when users set AZURE_RESOURCE_NAME instead of AZURE_BASE_URL.
Changes:
- Read AZURE_RESOURCE_NAME from environment and pass to createAzure()
- resourceName constructs endpoint: https://{name}.openai.azure.com/openai/v1
- baseURL takes precedence over resourceName when both are set
- Updated env.example with clearer documentation
Fixes #208
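The precedence, roughly (a sketch; createAzure from @ai-sdk/azure accepts either option):
```ts
import { createAzure } from "@ai-sdk/azure"

const { AZURE_API_KEY, AZURE_BASE_URL, AZURE_RESOURCE_NAME } = process.env

// baseURL wins when both are set; resourceName alone lets the SDK build
// https://{AZURE_RESOURCE_NAME}.openai.azure.com/openai/v1 itself.
export const azure = createAzure({
    apiKey: AZURE_API_KEY,
    ...(AZURE_BASE_URL
        ? { baseURL: AZURE_BASE_URL }
        : { resourceName: AZURE_RESOURCE_NAME }),
})
```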
- Add AI provider settings to config panel (provider, model, API key, base URL)
- Support 7 providers: OpenAI, Anthropic, Google, Azure, OpenRouter, DeepSeek, SiliconFlow
- Client API keys stored in localStorage, never stored on server
- Client settings override server env vars when provided
- Skip server credential validation when client provides API key
- Bypass usage limits (request/token/TPM) when using own API key
- Add /api/config endpoint for fetching usage limits
- Add privacy notices to settings dialog, about pages, and quota toast
- Add clear settings button to reset saved API keys
- Update README files (EN/CN/JA) with BYOK documentation
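The override order for the client settings above might look like this (a sketch; header names, the bypass flag, and the env fallback are assumptions):
```ts
// A client-supplied key (BYOK) wins over server env vars and also turns off
// the request/token/TPM limits for that request.
export function resolveAIConfig(headers: Headers) {
    const clientApiKey = headers.get("x-ai-api-key")
    if (clientApiKey) {
        return {
            provider: headers.get("x-ai-provider") ?? "openai",
            apiKey: clientApiKey,
            baseURL: headers.get("x-ai-base-url") ?? undefined,
            bypassUsageLimits: true,
        }
    }
    return {
        provider: process.env.AI_PROVIDER ?? "openai",
        apiKey: process.env.OPENAI_API_KEY,
        baseURL: undefined,
        bypassUsageLimits: false,
    }
}
```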
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
* feat: support minimax model with XML wrapping fix
- Add wrapWithMxFile utility to properly wrap XML for draw.io (see the sketch below)
- Fix 'Not a diagram file' error when model generates raw <root> XML
- Add supportsPromptCaching check for conditional caching
- Only enable Bedrock prompt caching for Claude models
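The wrapper is conceptually simple (a sketch; attribute values are placeholders, not the shipped ones):
```ts
// Wrap a bare <root>…</root> fragment in the mxfile/diagram/mxGraphModel
// envelope draw.io expects; leave already-wrapped XML untouched.
export function wrapWithMxFile(xml: string): string {
    const trimmed = xml.trim()
    if (trimmed.startsWith("<mxfile")) return trimmed

    const root = trimmed.startsWith("<root") ? trimmed : `<root>${trimmed}</root>`
    return [
        `<mxfile host="app.diagrams.net">`,
        `    <diagram id="diagram-1" name="Page-1">`,
        `        <mxGraphModel dx="800" dy="600" grid="1" gridSize="10">`,
        `            ${root}`,
        `        </mxGraphModel>`,
        `    </diagram>`,
        `</mxfile>`,
    ].join("\n")
}
```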
* docs: update model mention to minimax-m2 across About pages and READMEs
- Update tooltip in chat-panel.tsx to mention minimax-m2 model change
- Update English, Chinese, and Japanese About pages with model change info
- Update English, Chinese, and Japanese READMEs with demo site model note
---------
Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
- Remove default bedrock provider requirement
- Auto-detect provider when only one API key is configured
- Show a helpful error when no keys are configured, or when multiple keys are set without AI_PROVIDER
- Fixes #73
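Detection can reduce to counting configured keys (a sketch; the env-var map is abbreviated and illustrative):
```ts
const PROVIDER_ENV_KEYS: Record<string, string> = {
    openai: "OPENAI_API_KEY",
    anthropic: "ANTHROPIC_API_KEY",
    google: "GOOGLE_GENERATIVE_AI_API_KEY",
    deepseek: "DEEPSEEK_API_KEY",
    // ...other providers omitted for brevity
}

export function detectProvider(): string {
    if (process.env.AI_PROVIDER) return process.env.AI_PROVIDER

    const configured = Object.entries(PROVIDER_ENV_KEYS)
        .filter(([, envKey]) => !!process.env[envKey])
        .map(([provider]) => provider)

    if (configured.length === 1) return configured[0]
    throw new Error(
        configured.length === 0
            ? "No AI provider API key configured. Set one provider's API key in .env."
            : `Multiple provider keys found (${configured.join(", ")}). Set AI_PROVIDER to choose one.`,
    )
}
```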
- Extract system prompts to dedicated lib/system-prompts.ts module
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
- Install @ai-sdk/deepseek package
- Add DeepSeek provider support to lib/ai-providers.ts
- Add DeepSeek configuration to env.example
- Update README.md with DeepSeek in provider list
- Support both default and custom base URL for DeepSeek
* fix: add custom anthropic baseURL
* feat: add baseURL support for all AI providers
- Add GOOGLE_BASE_URL for Google Generative AI
- Add AZURE_BASE_URL for Azure OpenAI
- Add OLLAMA_BASE_URL support (was documented but not implemented)
- Add OPENROUTER_BASE_URL for OpenRouter
- Fix missing semicolon in Anthropic case
- Update env.example with new environment variables
Closes #20
---------
Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
* fix: correct anthropic beta header config for fine-grained tool streaming
- Use bedrock.anthropicBeta for Bedrock provider (not additionalModelRequestFields)
- Use top-level headers for direct Anthropic API
- Update @ai-sdk/amazon-bedrock to 3.0.62
- Add headers support to ModelConfig interface
* fix: update @ai-sdk/amazon-bedrock to 3.0.62 for tool streaming support
- Add OpenRouter provider support with @openrouter/ai-sdk-provider
- Fix input not disabling during 'submitted' state for fast providers (see the sketch below)
- Apply disable logic to all interactive elements (textarea, buttons, handlers)
- Clean up env.example by removing model examples and separator blocks
- Upgrade zod to v4.1.12 for compatibility with ollama-ai-provider-v2
- Add debug logging for status changes in chat components
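The disable logic referenced above boils down to one shared flag (a sketch; the status values are the ones useChat reports, the helper name is illustrative):
```ts
// Status values reported by useChat in the AI SDK.
type ChatStatus = "submitted" | "streaming" | "ready" | "error"

// 'submitted' is the phase between sending and the first streamed token;
// treat it as busy so fast providers cannot be double-submitted.
export function isChatBusy(status: ChatStatus): boolean {
    return status === "submitted" || status === "streaming"
}

// Apply the same flag to the textarea, send/stop buttons, and the
// Enter-to-send keyboard handler so every input path disables together.
```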