chore: update readme

Author: dayuan.jiang
Date: 2025-11-15 15:32:21 +09:00
Parent: ba219730ac
Commit: 065e8f0d62


@@ -14,7 +14,6 @@ Demo site: [https://next-ai-draw-io.vercel.app/](https://next-ai-draw-io.vercel.
- **Interactive Chat Interface**: Communicate with AI to refine your diagrams in real-time
- **Smart Editing**: Modify existing diagrams using simple text prompts
- **Targeted XML Editing**: AI can now make precise edits to specific parts of diagrams without regenerating the entire XML, making updates faster and more efficient
- **Improved XML Handling**: Automatic formatting of single-line XML for better compatibility and reliability
## How It Works
@@ -32,79 +31,16 @@ This application supports multiple AI providers, making it easy to deploy with y
### Supported Providers
| Provider | Status | Best For |
| ---------------- | ------------ | ------------------------------------------- |
| **AWS Bedrock** | ✅ Default | Claude models via AWS infrastructure |
| **OpenAI** | ✅ Supported | GPT-4, GPT-5, and reasoning models |
| **Anthropic** | ✅ Supported | Direct access to Claude models |
| **Google AI** | ✅ Supported | Gemini models with multi-modal capabilities |
| **Azure OpenAI** | ✅ Supported | Enterprise OpenAI deployments |
| **Ollama** | ✅ Supported | Local/self-hosted open source models |
Note that `claude-sonnet-4-5` has been trained on draw.io diagrams with AWS logos, so if you want to create AWS architecture diagrams, this is the best choice.

### Quick Setup by Provider
#### AWS Bedrock (Default)
```bash
AI_PROVIDER=bedrock
AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```
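If Bedrock requests fail, the usual culprits are credentials or missing model access. Assuming you have the AWS CLI installed and the same credentials available in your shell, a quick sanity check looks roughly like this:

```bash
# Confirm the credentials resolve to a valid AWS identity
aws sts get-caller-identity

# List the foundation models visible in the target region;
# the Claude model ID used for AI_MODEL should appear here
aws bedrock list-foundation-models --region us-east-1
```

Remember that Claude models also need to be enabled under model access in the Bedrock console for the region you configure.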
#### OpenAI
```bash
AI_PROVIDER=openai
AI_MODEL=gpt-4o
OPENAI_API_KEY=sk-...
```
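To confirm the key is valid before starting the app, one quick check (assuming `curl` and an exported `OPENAI_API_KEY`) is to list the models the key can access:

```bash
# A JSON model list in the response means the key is accepted;
# a 401 error means the key is wrong or revoked
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head -n 20
```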
#### Anthropic
```bash
AI_PROVIDER=anthropic
AI_MODEL=claude-sonnet-4-5
ANTHROPIC_API_KEY=sk-ant-...
```
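A similar sanity check works for Anthropic; this sketch assumes `curl`, an exported `ANTHROPIC_API_KEY`, and access to the models listing endpoint:

```bash
# Lists the Claude models available to this key
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" | head -n 20
```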
#### Google Generative AI
```bash
AI_PROVIDER=google
AI_MODEL=gemini-2.5-flash
GOOGLE_GENERATIVE_AI_API_KEY=...
```
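For Google Generative AI, you can verify the key the same way (assuming `curl` and an exported `GOOGLE_GENERATIVE_AI_API_KEY`); `gemini-2.5-flash` should appear in the list if it is available to your key:

```bash
# Lists the Gemini models visible to this API key
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_GENERATIVE_AI_API_KEY" | head -n 40
```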
#### Azure OpenAI
```bash
AI_PROVIDER=azure
AI_MODEL=your-deployment-name
AZURE_RESOURCE_NAME=your-resource
AZURE_API_KEY=...
```
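With Azure OpenAI, `AI_MODEL` must match the deployment name you created in the Azure portal, not a raw model name. The sketch below is one way to check that the deployment responds; the `api-version` value is an assumption, so substitute whichever version your resource supports:

```bash
# Minimal chat completion against the deployment named in AI_MODEL
curl -s "https://$AZURE_RESOURCE_NAME.openai.azure.com/openai/deployments/$AI_MODEL/chat/completions?api-version=2024-02-01" \
  -H "api-key: $AZURE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```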
#### Ollama (Local)
```bash
AI_PROVIDER=ollama
AI_MODEL=phi3
OLLAMA_BASE_URL=http://localhost:11434/api # Optional
```
Note: Install models locally first with `ollama pull <model-name>`.
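For the `phi3` example above, that would look like this (assuming the Ollama daemon is already running on the default port):

```bash
# Download the model referenced by AI_MODEL
ollama pull phi3

# Confirm it is installed
ollama list

# Optional: check that the HTTP API the app talks to is reachable
curl -s http://localhost:11434/api/tags
```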
### Recommended Models
**Best Quality:**
- AWS Bedrock: `global.anthropic.claude-sonnet-4-5-20250929-v1:0`
- Anthropic: `claude-sonnet-4-5`
- OpenAI: `gpt-4o` or `gpt-5`
**Best Speed:**
- Google: `gemini-2.5-flash`
- OpenAI: `gpt-4o`
- Anthropic: `claude-haiku-4-5`
**Best Cost:**
- Ollama: Free (local models)
- Google: `gemini-1.5-flash-8b`
- OpenAI: `gpt-4o-mini`
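Switching between these recommendations only requires changing `AI_PROVIDER`, `AI_MODEL`, and the matching credentials. For example, moving from the quality-oriented Bedrock default to the speed-oriented Gemini option is roughly:

```bash
# Before: quality-oriented default (AWS Bedrock + Claude)
# AI_PROVIDER=bedrock
# AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0

# After: speed-oriented setup (Google + Gemini)
AI_PROVIDER=google
AI_MODEL=gemini-2.5-flash
GOOGLE_GENERATIVE_AI_API_KEY=...
```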
## Getting Started
@@ -130,13 +66,14 @@ yarn install
Create a `.env.local` file in the root directory:
```bash
cp env.example .env.local
```
Edit `.env.local` and configure your chosen provider:

- Set `AI_PROVIDER` to your chosen provider (bedrock, openai, anthropic, google, azure, ollama)
- Set `AI_MODEL` to the specific model you want to use
- Add the required API keys for your provider
See the [Multi-Provider Support](#multi-provider-support) section above for provider-specific configuration examples.
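For instance, a minimal `.env.local` for the default Bedrock provider (with placeholder values) would look like this:

```bash
# Example .env.local for the default Bedrock provider (placeholder values)
AI_PROVIDER=bedrock
AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```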
@@ -157,6 +94,8 @@ Check out the [Next.js deployment documentation](https://nextjs.org/docs/app/bui
Or you can deploy with this button.
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FDayuanJiang%2Fnext-ai-draw-io)
Be sure to **set the environment variables** in the Vercel dashboard as you did in your local `.env.local` file.
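You can add them in the project settings UI, or, if you use the Vercel CLI, with commands along these lines (each command prompts for the value; the variable names simply mirror your `.env.local`):

```bash
# Provider selection
vercel env add AI_PROVIDER production
vercel env add AI_MODEL production

# Credentials for the chosen provider, e.g. for the Bedrock default
vercel env add AWS_REGION production
vercel env add AWS_ACCESS_KEY_ID production
vercel env add AWS_SECRET_ACCESS_KEY production
```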
## Project Structure
```