Compare commits


11 Commits

Author SHA1 Message Date
Dayuan Jiang
085d656a3c chore: bump version to 0.4.10 (#540) 2026-01-09 10:41:35 +09:00
Dayuan Jiang
d22474b541 feat: add proxy settings to Settings dialog (Desktop only) (#537)
* feat: add proxy settings to Settings dialog (Desktop only)

Fixes #535 - Desktop app now respects HTTP/HTTPS proxy configuration.

- Add proxy-manager.ts to handle proxy config storage (JSON file in userData)
- Load proxy settings on app startup before Next.js server starts
- Add IPC handlers for get-proxy and set-proxy
- Add proxy settings UI in Settings dialog (Electron only)
- Add translations for en/zh/ja

* fix: improve proxy settings reliability and simplify UI

- Fix server restart race condition (wait for process exit before starting new server)
- Add URL validation (must include http:// or https:// prefix)
- Enable Node.js built-in proxy support (NODE_USE_ENV_PROXY=1)
- Remove "Proxy Exceptions" field (unnecessary for this app)
- Add debug logging for proxy env vars

* refactor: remove duplicate ProxyConfig interface, import from electron.d.ts
2026-01-09 09:26:19 +09:00
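As a rough illustration of the mechanism these bullets describe (a sketch only, not this repository's exact code): on Node.js 24+, `NODE_USE_ENV_PROXY` is read at process startup, so the proxy variables must already be in the child's environment when the server process is forked. The desktop app itself uses Electron's `utilityProcess.fork` for this, as shown in the diff further below; the proxy URLs here are placeholders.

```ts
import { fork } from "node:child_process"

// Sketch: proxy URLs are placeholders; the real app loads them from
// proxy-config.json in userData before starting the Next.js server.
const child = fork("server.js", [], {
    env: {
        ...process.env,
        HTTP_PROXY: "http://proxy:8080",
        HTTPS_PROXY: "http://proxy:8080",
        // Node.js 24+: route the built-in fetch through HTTP(S)_PROXY.
        NODE_USE_ENV_PROXY: "1",
    },
})

child.once("exit", (code) => console.log(`server exited with code ${code}`))
```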
Dayuan Jiang
083c2a4142 fix: specify artifact-configuration-slug for SignPath (#533) 2026-01-08 12:52:24 +09:00
Dayuan Jiang
c4b1ec8d28 feat: add SignPath code signing for Windows builds (#531)
- Split workflow into mac/linux and windows jobs
- Add dist:win:build script with --publish never
- Integrate SignPath signing for Windows executables
- Sign both NSIS installer and portable EXE files
2026-01-08 10:51:12 +09:00
Dayuan Jiang
6ad4a9b303 chore(mcp-server): fix author and repository to DayuanJiang (#529) 2026-01-07 12:30:14 +09:00
broBinChen
dcf222114c fix: add missing nanoid dependency (#528) 2026-01-07 12:06:05 +09:00
Biki Kalita
4ece615548 fix - not clearing the loading state (#524) 2026-01-07 08:30:12 +09:00
yrk111222
54fd48506d Feat/add modelscope support (#521)
* add ModelScope API support

* update some documentation

* modify some details
2026-01-06 19:41:25 +09:00
zhoujie0531
ffcb241383 feat: mod readme (#522)
Co-authored-by: zoejiezhou <zoejiezhou@tencent.com>
2026-01-06 17:57:40 +09:00
Dayuan Jiang
79491e2143 chore: remove usage limits from about pages (#520) 2026-01-06 10:46:13 +09:00
Biki Kalita
6326f9dec6 🔗 Add URL Content Extraction Feature (#514)
* feat: add URL content extraction for AI diagram generation

* Changes made as recommended by Claude:

1. Added a request timeout to prevent server resources from being tied up (route.ts)
2. Implemented runtime validation for the API response shape (url-utils.ts)
3. Removed hardcoded English error messages and replaced them with localized strings (url-input-dialog.tsx)
4. Fixed the incorrect i18n namespace (changed from pdf.* to url.*) (url-input-dialog.tsx and en/ja/zh.json)

* chore: restore package.json and package-lock.json

* fix: use i18n strings for URL dialog error messages

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2026-01-06 00:23:50 +09:00
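A hypothetical sketch of the timeout-plus-validation pattern this commit describes; the names and the 10-second budget are illustrative, not the repository's actual code.

```ts
// Illustrative only: runtime shape validation plus a request timeout,
// mirroring the pattern described in the commit message above.
interface ExtractedUrlData {
    title: string
    content: string
}

function isExtractedUrlData(v: unknown): v is ExtractedUrlData {
    if (typeof v !== "object" || v === null) return false
    const o = v as Record<string, unknown>
    return typeof o.title === "string" && typeof o.content === "string"
}

async function extractUrl(url: string): Promise<ExtractedUrlData> {
    // Abort if the remote host stalls, so the API route doesn't hold
    // server resources open indefinitely.
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) })
    if (!res.ok) throw new Error(`Extraction failed (${res.status})`)

    // Validate the response shape at runtime before trusting it.
    const body: unknown = await res.json()
    if (!isExtractedUrlData(body)) throw new Error("Unexpected response shape")
    return body
}
```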
33 changed files with 564 additions and 177 deletions

View File

@@ -11,7 +11,8 @@ on:
required: false
jobs:
build:
# Mac and Linux: Build and publish directly (no signing needed)
build-mac-linux:
permissions:
contents: write
strategy:
@@ -20,13 +21,9 @@ jobs:
include:
- os: macos-latest
platform: mac
- os: windows-latest
platform: win
- os: ubuntu-latest
platform: linux
runs-on: ${{ matrix.os }}
steps:
- name: Checkout code
uses: actions/checkout@v6
@@ -40,7 +37,58 @@ jobs:
- name: Install dependencies
run: npm install
- name: Build and publish Electron app
- name: Build and publish
run: npm run dist:${{ matrix.platform }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Windows: Build, sign with SignPath, then publish
build-windows:
permissions:
contents: write
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: 24
cache: "npm"
- name: Install dependencies
run: npm install
# Build WITHOUT publishing
- name: Build Windows app
run: npm run dist:win:build
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload unsigned artifacts for signing
uses: actions/upload-artifact@v4
id: upload-unsigned
with:
name: windows-unsigned
path: release/*.exe
retention-days: 1
- name: Sign with SignPath
uses: signpath/github-action-submit-signing-request@v2
with:
api-token: ${{ secrets.SIGNPATH_API_TOKEN }}
organization-id: '880a211d-2cd3-4e7b-8d04-3d1f8eb39df5'
project-slug: 'next-ai-draw-io'
signing-policy-slug: 'test-signing'
artifact-configuration-slug: 'windows-exe'
github-artifact-id: ${{ steps.upload-unsigned.outputs.artifact-id }}
wait-for-completion: true
output-artifact-directory: release-signed
- name: Upload signed artifacts to release
uses: softprops/action-gh-release@v2
with:
files: release-signed/*.exe
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -40,7 +40,7 @@ https://github.com/user-attachments/assets/9d60a3e8-4a1c-4b5e-acbb-26af2d3eabd1
- [Installation](#installation)
- [Deployment](#deployment)
- [Deploy to EdgeOne Pages](#deploy-to-edgeone-pages)
- [Deploy on Vercel (Recommended)](#deploy-on-vercel-recommended)
- [Deploy on Vercel](#deploy-on-vercel)
- [Deploy on Cloudflare Workers](#deploy-on-cloudflare-workers)
- [Multi-Provider Support](#multi-provider-support)
- [How It Works](#how-it-works)
@@ -185,7 +185,7 @@ Check out the [Tencent EdgeOne Pages documentation](https://pages.edgeone.ai/doc
Additionally, deploying through Tencent EdgeOne Pages will also grant you a [daily free quota for DeepSeek models](https://pages.edgeone.ai/document/edge-ai).
### Deploy on Vercel (Recommended)
### Deploy on Vercel
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FDayuanJiang%2Fnext-ai-draw-io)
@@ -211,6 +211,7 @@ See the [Next.js deployment documentation](https://nextjs.org/docs/app/building-
- OpenRouter
- DeepSeek
- SiliconFlow
- ModelScope
- SGLang
- Vercel AI Gateway

View File

@@ -10,18 +10,7 @@ export const metadata: Metadata = {
keywords: ["AI图表", "draw.io", "AWS架构", "GCP图表", "Azure图表", "LLM"],
}
function formatNumber(num: number): string {
if (num >= 1000) {
return `${num / 1000}k`
}
return num.toString()
}
export default function AboutCN() {
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
return (
<div className="min-h-screen bg-gray-50">
{/* Navigation */}
@@ -108,42 +97,6 @@ export default function AboutCN() {
</p>
</div>
{/* Usage Limits */}
<p className="text-sm text-gray-600 mb-3">
请注意当前的使用限制：
</p>
<div className="grid grid-cols-3 gap-3 mb-5">
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyRequestLimit)}
</p>
<p className="text-xs text-gray-500">
请求/天
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyTokenLimit)}
</p>
<p className="text-xs text-gray-500">
Token/天
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(tpmLimit)}
</p>
<p className="text-xs text-gray-500">
Token/分钟
</p>
</div>
</div>
{/* Divider */}
<div className="flex items-center gap-3 my-5">
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
</div>
{/* Bring Your Own Key */}
<div className="text-center">
<h4 className="text-base font-bold text-gray-900 mb-2">
@@ -344,6 +297,7 @@ export default function AboutCN() {
<li>OpenRouter</li>
<li>DeepSeek</li>
<li>SiliconFlow</li>
<li>ModelScope</li>
</ul>
<p className="text-gray-700 mt-4">
<code>claude-sonnet-4-5</code>{" "}

View File

@@ -17,18 +17,7 @@ export const metadata: Metadata = {
],
}
function formatNumber(num: number): string {
if (num >= 1000) {
return `${num / 1000}k`
}
return num.toString()
}
export default function AboutJA() {
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
return (
<div className="min-h-screen bg-gray-50">
{/* Navigation */}
@@ -116,42 +105,6 @@ export default function AboutJA() {
</p>
</div>
{/* Usage Limits */}
<p className="text-sm text-gray-600 mb-3">
現在の使用制限にご注意ください：
</p>
<div className="grid grid-cols-3 gap-3 mb-5">
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyRequestLimit)}
</p>
<p className="text-xs text-gray-500">
リクエスト/日
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyTokenLimit)}
</p>
<p className="text-xs text-gray-500">
トークン/日
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(tpmLimit)}
</p>
<p className="text-xs text-gray-500">
トークン/分
</p>
</div>
</div>
{/* Divider */}
<div className="flex items-center gap-3 my-5">
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
</div>
{/* Bring Your Own Key */}
<div className="text-center">
<h4 className="text-base font-bold text-gray-900 mb-2">
@@ -359,6 +312,7 @@ export default function AboutJA() {
<li>OpenRouter</li>
<li>DeepSeek</li>
<li>SiliconFlow</li>
<li>ModelScope</li>
</ul>
<p className="text-gray-700 mt-4">
<code>claude-sonnet-4-5</code>

View File

@@ -17,18 +17,7 @@ export const metadata: Metadata = {
],
}
function formatNumber(num: number): string {
if (num >= 1000) {
return `${num / 1000}k`
}
return num.toString()
}
export default function About() {
const dailyRequestLimit = Number(process.env.DAILY_REQUEST_LIMIT) || 20
const dailyTokenLimit = Number(process.env.DAILY_TOKEN_LIMIT) || 500000
const tpmLimit = Number(process.env.TPM_LIMIT) || 50000
return (
<div className="min-h-screen bg-gray-50">
{/* Navigation */}
@@ -118,42 +107,6 @@ export default function About() {
</p>
</div>
{/* Usage Limits */}
<p className="text-sm text-gray-600 mb-3">
Please note the current usage limits:
</p>
<div className="grid grid-cols-3 gap-3 mb-5">
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyRequestLimit)}
</p>
<p className="text-xs text-gray-500">
requests/day
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(dailyTokenLimit)}
</p>
<p className="text-xs text-gray-500">
tokens/day
</p>
</div>
<div className="text-center p-3 bg-white/60 rounded-lg">
<p className="text-lg font-bold text-amber-600">
{formatNumber(tpmLimit)}
</p>
<p className="text-xs text-gray-500">
tokens/min
</p>
</div>
</div>
{/* Divider */}
<div className="flex items-center gap-3 my-5">
<div className="flex-1 h-px bg-gradient-to-r from-transparent via-amber-300 to-transparent" />
</div>
{/* Bring Your Own Key */}
<div className="text-center">
<h4 className="text-base font-bold text-gray-900 mb-2">
@@ -378,6 +331,7 @@ export default function About() {
<li>OpenRouter</li>
<li>DeepSeek</li>
<li>SiliconFlow</li>
<li>ModelScope</li>
</ul>
<p className="text-gray-700 mt-4">
Note that <code>claude-sonnet-4-5</code> has trained on

View File

@@ -274,6 +274,75 @@ export async function POST(req: Request) {
break
}
case "modelscope": {
const baseURL =
baseUrl || "https://api-inference.modelscope.cn/v1"
const startTime = Date.now()
try {
// Initiate a streaming request (required for QwQ-32B and certain Qwen3 models)
const response = await fetch(
`${baseURL}/chat/completions`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${apiKey}`,
},
body: JSON.stringify({
model: modelId,
messages: [
{ role: "user", content: "Say 'OK'" },
],
max_tokens: 20,
stream: true,
enable_thinking: false,
}),
},
)
if (!response.ok) {
const errorText = await response.text()
throw new Error(
`ModelScope API error (${response.status}): ${errorText}`,
)
}
const contentType =
response.headers.get("content-type") || ""
const isValidStreamingResponse =
response.status === 200 &&
(contentType.includes("text/event-stream") ||
contentType.includes("application/json"))
if (!isValidStreamingResponse) {
throw new Error(
`Unexpected response format: ${contentType}`,
)
}
const responseTime = Date.now() - startTime
if (response.body) {
response.body.cancel().catch(() => {
/* Ignore cancellation errors */
})
}
return NextResponse.json({
valid: true,
responseTime,
note: "ModelScope model validated (using streaming API)",
})
} catch (error) {
console.error(
"[validate-model] ModelScope validation failed:",
error,
)
throw error
}
}
default:
return NextResponse.json(
{ valid: false, error: `Unknown provider: ${provider}` },

View File

@@ -347,6 +347,12 @@ export function ChatInput({
setShowUrlDialog(false)
} catch (error) {
// Remove the URL from the data map on error
const newUrlData = urlData
? new Map(urlData)
: new Map<string, UrlData>()
newUrlData.delete(url)
onUrlChange(newUrlData)
showErrorToast(
<span className="text-muted-foreground">
{error instanceof Error

View File

@@ -79,6 +79,7 @@ const PROVIDER_LOGO_MAP: Record<string, string> = {
gateway: "vercel",
edgeone: "tencent-cloud",
doubao: "bytedance",
modelscope: "modelscope",
}
// Provider logo component

View File

@@ -50,6 +50,7 @@ const PROVIDER_LOGO_MAP: Record<string, string> = {
gateway: "vercel",
edgeone: "tencent-cloud",
doubao: "bytedance",
modelscope: "modelscope",
}
// Group models by providerLabel (handles duplicate providers)

View File

@@ -3,6 +3,7 @@
import { Github, Info, Moon, Sun, Tag } from "lucide-react"
import { usePathname, useRouter, useSearchParams } from "next/navigation"
import { Suspense, useEffect, useState } from "react"
import { toast } from "sonner"
import { Button } from "@/components/ui/button"
import {
Dialog,
@@ -103,6 +104,11 @@ function SettingsContent({
)
const [currentLang, setCurrentLang] = useState("en")
// Proxy settings state (Electron only)
const [httpProxy, setHttpProxy] = useState("")
const [httpsProxy, setHttpsProxy] = useState("")
const [isApplyingProxy, setIsApplyingProxy] = useState(false)
useEffect(() => {
// Only fetch if not cached in localStorage
if (getStoredAccessCodeRequired() !== null) return
@@ -150,6 +156,14 @@ function SettingsContent({
setCloseProtection(storedCloseProtection !== "false")
setError("")
// Load proxy settings (Electron only)
if (window.electronAPI?.getProxy) {
window.electronAPI.getProxy().then((config) => {
setHttpProxy(config.httpProxy || "")
setHttpsProxy(config.httpsProxy || "")
})
}
}
}, [open])
@@ -208,6 +222,46 @@ function SettingsContent({
}
}
const handleApplyProxy = async () => {
if (!window.electronAPI?.setProxy) return
// Validate proxy URLs (must start with http:// or https://)
const validateProxyUrl = (url: string): boolean => {
if (!url) return true // Empty is OK
return url.startsWith("http://") || url.startsWith("https://")
}
const trimmedHttp = httpProxy.trim()
const trimmedHttps = httpsProxy.trim()
if (trimmedHttp && !validateProxyUrl(trimmedHttp)) {
toast.error("HTTP Proxy must start with http:// or https://")
return
}
if (trimmedHttps && !validateProxyUrl(trimmedHttps)) {
toast.error("HTTPS Proxy must start with http:// or https://")
return
}
setIsApplyingProxy(true)
try {
const result = await window.electronAPI.setProxy({
httpProxy: trimmedHttp || undefined,
httpsProxy: trimmedHttps || undefined,
})
if (result.success) {
toast.success(dict.settings.proxyApplied)
} else {
toast.error(result.error || "Failed to apply proxy settings")
}
} catch {
toast.error("Failed to apply proxy settings")
} finally {
setIsApplyingProxy(false)
}
}
return (
<DialogContent className="sm:max-w-lg p-0 gap-0">
{/* Header */}
@@ -370,6 +424,54 @@ function SettingsContent({
</span>
</div>
</SettingItem>
{/* Proxy Settings - Electron only */}
{typeof window !== "undefined" &&
window.electronAPI?.isElectron && (
<div className="py-4 space-y-3">
<div className="space-y-0.5">
<Label className="text-sm font-medium">
{dict.settings.proxy}
</Label>
<p className="text-xs text-muted-foreground">
{dict.settings.proxyDescription}
</p>
</div>
<div className="space-y-2">
<Input
id="http-proxy"
type="text"
value={httpProxy}
onChange={(e) =>
setHttpProxy(e.target.value)
}
placeholder={`${dict.settings.httpProxy}: http://proxy:8080`}
className="h-9"
/>
<Input
id="https-proxy"
type="text"
value={httpsProxy}
onChange={(e) =>
setHttpsProxy(e.target.value)
}
placeholder={`${dict.settings.httpsProxy}: http://proxy:8080`}
className="h-9"
/>
</div>
<Button
onClick={handleApplyProxy}
disabled={isApplyingProxy}
className="h-9 px-4 rounded-xl w-full"
>
{isApplyingProxy
? "..."
: dict.settings.applyProxy}
</Button>
</div>
)}
</div>
</div>

View File

@@ -35,14 +35,14 @@ export function UrlInputDialog({
setError("")
if (!url.trim()) {
setError("Please enter a URL")
setError(dict.url.enterUrl)
return
}
try {
new URL(url)
} catch {
setError("Invalid URL format")
setError(dict.url.invalidFormat)
return
}

View File

@@ -37,7 +37,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
- [安装](#安装)
- [部署](#部署)
- [部署到腾讯云EdgeOne Pages](#部署到腾讯云edgeone-pages)
- [部署到Vercel(推荐)](#部署到vercel推荐)
- [部署到Vercel](#部署到vercel)
- [部署到Cloudflare Workers](#部署到cloudflare-workers)
- [多提供商支持](#多提供商支持)
- [工作原理](#工作原理)
@@ -179,7 +179,7 @@ npm run dev
同时通过腾讯云EdgeOne Pages部署也会获得[每日免费的DeepSeek模型额度](https://edgeone.cloud.tencent.com/pages/document/169925463311781888)。
### 部署到Vercel(推荐)
### 部署到Vercel
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FDayuanJiang%2Fnext-ai-draw-io)
@@ -204,6 +204,7 @@ npm run dev
- OpenRouter
- DeepSeek
- SiliconFlow
- ModelScope
- SGLang
- Vercel AI Gateway

View File

@@ -152,6 +152,19 @@ AI_PROVIDER=ollama
AI_MODEL=llama3.2
```
### ModelScope
```bash
MODELSCOPE_API_KEY=your_api_key
AI_MODEL=Qwen/Qwen3-235B-A22B-Instruct-2507
```
可选的自定义端点:
```bash
MODELSCOPE_BASE_URL=https://your-custom-endpoint
```
可选的自定义 URL
```bash

View File

@@ -158,6 +158,19 @@ Optional custom URL:
OLLAMA_BASE_URL=http://localhost:11434
```
### ModelScope
```bash
MODELSCOPE_API_KEY=your_api_key
AI_MODEL=Qwen/Qwen3-235B-A22B-Instruct-2507
```
Optional custom endpoint:
```bash
MODELSCOPE_BASE_URL=https://your-custom-endpoint
```
### Vercel AI Gateway
Vercel AI Gateway provides unified access to multiple AI providers through a single API key. This simplifies authentication and allows you to switch between providers without managing multiple API keys.
@@ -201,7 +214,7 @@ If you only configure **one** provider's API key, the system will automatically
If you configure **multiple** API keys, you must explicitly set `AI_PROVIDER`:
```bash
AI_PROVIDER=google # or: openai, anthropic, deepseek, siliconflow, doubao, azure, bedrock, openrouter, ollama, gateway, sglang
AI_PROVIDER=google # or: openai, anthropic, deepseek, siliconflow, doubao, azure, bedrock, openrouter, ollama, gateway, sglang, modelscope
```
## Model Capability Requirements

View File

@@ -37,7 +37,7 @@ https://github.com/user-attachments/assets/b2eef5f3-b335-4e71-a755-dc2e80931979
- [インストール](#インストール)
- [デプロイ](#デプロイ)
- [EdgeOne Pagesへのデプロイ](#edgeone-pagesへのデプロイ)
- [Vercelへのデプロイ(推奨)](#vercelへのデプロイ推奨)
- [Vercelへのデプロイ](#vercelへのデプロイ)
- [Cloudflare Workersへのデプロイ](#cloudflare-workersへのデプロイ)
- [マルチプロバイダーサポート](#マルチプロバイダーサポート)
- [仕組み](#仕組み)
@@ -180,7 +180,7 @@ npm run dev
また、Tencent EdgeOne Pagesでデプロイすると、[DeepSeekモデルの毎日の無料クォータ](https://pages.edgeone.ai/document/edge-ai)が付与されます。
### Vercelへのデプロイ(推奨)
### Vercelへのデプロイ
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FDayuanJiang%2Fnext-ai-draw-io)
@@ -205,6 +205,7 @@ Next.jsアプリをデプロイする最も簡単な方法は、Next.jsの作成
- OpenRouter
- DeepSeek
- SiliconFlow
- ModelScope
- SGLang
- Vercel AI Gateway

View File

@@ -158,6 +158,19 @@ AI_MODEL=llama3.2
OLLAMA_BASE_URL=http://localhost:11434
```
### ModelScope
```bash
MODELSCOPE_API_KEY=your_api_key
AI_MODEL=Qwen/Qwen3-235B-A22B-Instruct-2507
```
任意のカスタムエンドポイント:
```bash
MODELSCOPE_BASE_URL=https://your-custom-endpoint
```
### Vercel AI Gateway
Vercel AI Gateway は、単一の API キーで複数の AI プロバイダーへの統合アクセスを提供します。これにより認証が簡素化され、複数の API キーを管理することなくプロバイダーを切り替えることができます。

View File

@@ -25,6 +25,19 @@ interface ApplyPresetResult {
env?: Record<string, string>
}
/** Proxy configuration interface */
interface ProxyConfig {
httpProxy?: string
httpsProxy?: string
}
/** Result of setting proxy */
interface SetProxyResult {
success: boolean
error?: string
devMode?: boolean
}
declare global {
interface Window {
/** Main window Electron API */
@@ -45,6 +58,10 @@ declare global {
openFile: () => Promise<string | null>
/** Save data to file via save dialog */
saveFile: (data: string) => Promise<boolean>
/** Get proxy configuration */
getProxy: () => Promise<ProxyConfig>
/** Set proxy configuration (saves and restarts server) */
setProxy: (config: ProxyConfig) => Promise<SetProxyResult>
}
/** Settings window Electron API */
@@ -71,4 +88,4 @@ declare global {
}
}
export { ConfigPreset, ApplyPresetResult }
export { ConfigPreset, ApplyPresetResult, ProxyConfig, SetProxyResult }

View File

@@ -351,6 +351,10 @@ const PROVIDER_ENV_MAP: Record<string, { apiKey: string; baseUrl: string }> = {
apiKey: "SILICONFLOW_API_KEY",
baseUrl: "SILICONFLOW_BASE_URL",
},
modelscope: {
apiKey: "MODELSCOPE_API_KEY",
baseUrl: "MODELSCOPE_BASE_URL",
},
gateway: { apiKey: "AI_GATEWAY_API_KEY", baseUrl: "AI_GATEWAY_BASE_URL" },
// bedrock and ollama don't use API keys in the same way
bedrock: { apiKey: "", baseUrl: "" },

View File

@@ -4,6 +4,7 @@ import { getCurrentPresetEnv } from "./config-manager"
import { loadEnvFile } from "./env-loader"
import { registerIpcHandlers } from "./ipc-handlers"
import { startNextServer, stopNextServer } from "./next-server"
import { applyProxyToEnv } from "./proxy-manager"
import { registerSettingsWindowHandlers } from "./settings-window"
import { createWindow, getMainWindow } from "./window-manager"
@@ -24,6 +25,9 @@ if (!gotTheLock) {
// Load environment variables from .env files
loadEnvFile()
// Apply proxy settings from saved config
applyProxyToEnv()
// Apply saved preset environment variables (overrides .env)
const presetEnv = getCurrentPresetEnv()
for (const [key, value] of Object.entries(presetEnv)) {

View File

@@ -11,6 +11,12 @@ import {
updatePreset,
} from "./config-manager"
import { restartNextServer } from "./next-server"
import {
applyProxyToEnv,
getProxyConfig,
type ProxyConfig,
saveProxyConfig,
} from "./proxy-manager"
/**
* Allowed configuration keys for presets
@@ -209,4 +215,40 @@ export function registerIpcHandlers(): void {
return setCurrentPreset(id)
},
)
// ==================== Proxy Settings ====================
ipcMain.handle("get-proxy", () => {
return getProxyConfig()
})
ipcMain.handle("set-proxy", async (_event, config: ProxyConfig) => {
try {
// Save config to file
saveProxyConfig(config)
// Apply to current process environment
applyProxyToEnv()
const isDev = process.env.NODE_ENV === "development"
if (isDev) {
// In development, env vars are already applied
// Next.js dev server may need manual restart
return { success: true, devMode: true }
}
// Production: restart Next.js server to pick up new env vars
await restartNextServer()
return { success: true }
} catch (error) {
return {
success: false,
error:
error instanceof Error
? error.message
: "Failed to apply proxy settings",
}
}
})
}

View File

@@ -69,6 +69,8 @@ export async function startNextServer(): Promise<string> {
NODE_ENV: "production",
PORT: String(port),
HOSTNAME: "localhost",
// Enable Node.js built-in proxy support for fetch (Node.js 24+)
NODE_USE_ENV_PROXY: "1",
}
// Set cache directory to a writable location (user's app data folder)
@@ -85,6 +87,13 @@ export async function startNextServer(): Promise<string> {
}
}
// Debug: log proxy-related env vars
console.log("Proxy env vars being passed to server:", {
HTTP_PROXY: env.HTTP_PROXY || env.http_proxy || "not set",
HTTPS_PROXY: env.HTTPS_PROXY || env.https_proxy || "not set",
NODE_USE_ENV_PROXY: env.NODE_USE_ENV_PROXY || "not set",
})
// Use Electron's utilityProcess API for running Node.js in background
// This is the recommended way to run Node.js code in Electron
serverProcess = utilityProcess.fork(serverPath, [], {
@@ -114,13 +123,41 @@ export async function startNextServer(): Promise<string> {
}
/**
* Stop the Next.js server process
* Stop the Next.js server process and wait for it to exit
*/
export function stopNextServer(): void {
export async function stopNextServer(): Promise<void> {
if (serverProcess) {
console.log("Stopping Next.js server...")
// Create a promise that resolves when the process exits
const exitPromise = new Promise<void>((resolve) => {
const proc = serverProcess
if (!proc) {
resolve()
return
}
const onExit = () => {
resolve()
}
proc.once("exit", onExit)
// Timeout after 5 seconds
setTimeout(() => {
proc.removeListener("exit", onExit)
resolve()
}, 5000)
})
serverProcess.kill()
serverProcess = null
// Wait for process to exit
await exitPromise
// Additional wait for OS to release port
await new Promise((resolve) => setTimeout(resolve, 500))
}
}
@@ -150,8 +187,8 @@ async function waitForServerStop(timeout = 5000): Promise<void> {
export async function restartNextServer(): Promise<string> {
console.log("Restarting Next.js server...")
// Stop the current server
stopNextServer()
// Stop the current server and wait for it to exit
await stopNextServer()
// Wait for the port to be released
await waitForServerStop()

View File

@@ -0,0 +1,75 @@
import { app } from "electron"
import * as fs from "fs"
import * as path from "path"
import type { ProxyConfig } from "../electron.d"
export type { ProxyConfig }
const CONFIG_FILE = "proxy-config.json"
function getConfigPath(): string {
return path.join(app.getPath("userData"), CONFIG_FILE)
}
/**
* Load proxy configuration from JSON file
*/
export function loadProxyConfig(): ProxyConfig {
try {
const configPath = getConfigPath()
if (fs.existsSync(configPath)) {
const data = fs.readFileSync(configPath, "utf-8")
return JSON.parse(data) as ProxyConfig
}
} catch (error) {
console.error("Failed to load proxy config:", error)
}
return {}
}
/**
* Save proxy configuration to JSON file
*/
export function saveProxyConfig(config: ProxyConfig): void {
try {
const configPath = getConfigPath()
fs.writeFileSync(configPath, JSON.stringify(config, null, 2), "utf-8")
} catch (error) {
console.error("Failed to save proxy config:", error)
throw error
}
}
/**
* Apply proxy configuration to process.env
* Must be called BEFORE starting the Next.js server
*/
export function applyProxyToEnv(): void {
const config = loadProxyConfig()
if (config.httpProxy) {
process.env.HTTP_PROXY = config.httpProxy
process.env.http_proxy = config.httpProxy
} else {
delete process.env.HTTP_PROXY
delete process.env.http_proxy
}
if (config.httpsProxy) {
process.env.HTTPS_PROXY = config.httpsProxy
process.env.https_proxy = config.httpsProxy
} else {
delete process.env.HTTPS_PROXY
delete process.env.https_proxy
}
}
/**
* Get current proxy configuration (from process.env)
*/
export function getProxyConfig(): ProxyConfig {
return {
httpProxy: process.env.HTTP_PROXY || process.env.http_proxy || "",
httpsProxy: process.env.HTTPS_PROXY || process.env.https_proxy || "",
}
}
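For illustration, a minimal sketch of what this module persists: `saveProxyConfig` writes the `ProxyConfig` object as pretty-printed JSON to `proxy-config.json` in the Electron userData directory. Values below are placeholders.

```ts
// Placeholder values; mirrors the on-disk format saveProxyConfig produces.
interface ProxyConfig {
    httpProxy?: string
    httpsProxy?: string
}

const example: ProxyConfig = {
    httpProxy: "http://proxy:8080",
    httpsProxy: "http://proxy:8080",
}

// Same serialization as saveProxyConfig: JSON.stringify(config, null, 2)
console.log(JSON.stringify(example, null, 2))
```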

View File

@@ -21,4 +21,9 @@ contextBridge.exposeInMainWorld("electronAPI", {
// File operations
openFile: () => ipcRenderer.invoke("dialog-open-file"),
saveFile: (data: string) => ipcRenderer.invoke("dialog-save-file", data),
// Proxy settings
getProxy: () => ipcRenderer.invoke("get-proxy"),
setProxy: (config: { httpProxy?: string; httpsProxy?: string }) =>
ipcRenderer.invoke("set-proxy", config),
})

View File

@@ -55,6 +55,7 @@
<option value="openrouter">OpenRouter</option>
<option value="deepseek">DeepSeek</option>
<option value="siliconflow">SiliconFlow</option>
<option value="modelscope">ModelScope</option>
<option value="ollama">Ollama (Local)</option>
</select>
</div>

View File

@@ -288,6 +288,7 @@ function getProviderLabel(provider) {
openrouter: "OpenRouter",
deepseek: "DeepSeek",
siliconflow: "SiliconFlow",
modelscope: "ModelScope",
ollama: "Ollama",
}
return labels[provider] || provider

View File

@@ -72,6 +72,10 @@ AI_MODEL=global.anthropic.claude-sonnet-4-5-20250929-v1:0
# SGLANG_API_KEY=your-sglang-api-key
# SGLANG_BASE_URL=http://127.0.0.1:8000/v1 # Your SGLang endpoint
# ModelScope Configuration
# MODELSCOPE_API_KEY=ms-...
# MODELSCOPE_BASE_URL=https://api-inference.modelscope.cn/v1 # Optional: Custom endpoint
# ByteDance Doubao Configuration (via Volcengine)
# DOUBAO_API_KEY=your-doubao-api-key
# DOUBAO_BASE_URL=https://ark.cn-beijing.volces.com/api/v3 # ByteDance Volcengine endpoint

View File

@@ -23,6 +23,7 @@ export type ProviderName =
| "gateway"
| "edgeone"
| "doubao"
| "modelscope"
interface ModelConfig {
model: any
@@ -59,6 +60,7 @@ const ALLOWED_CLIENT_PROVIDERS: ProviderName[] = [
"gateway",
"edgeone",
"doubao",
"modelscope",
]
// Bedrock provider options for Anthropic beta features
@@ -353,6 +355,7 @@ function buildProviderOptions(
case "siliconflow":
case "sglang":
case "gateway":
case "modelscope":
case "doubao": {
// These providers don't have reasoning configs in AI SDK yet
// Gateway passes through to underlying providers which handle their own configs
@@ -381,6 +384,7 @@ const PROVIDER_ENV_VARS: Record<ProviderName, string | null> = {
gateway: "AI_GATEWAY_API_KEY",
edgeone: null, // No credentials needed - uses EdgeOne Edge AI
doubao: "DOUBAO_API_KEY",
modelscope: "MODELSCOPE_API_KEY",
}
/**
@@ -445,7 +449,7 @@ function validateProviderCredentials(provider: ProviderName): void {
* Get the AI model based on environment variables
*
* Environment variables:
* - AI_PROVIDER: The provider to use (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway)
* - AI_PROVIDER: The provider to use (bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway, modelscope)
* - AI_MODEL: The model ID/name for the selected provider
*
* Provider-specific env vars:
@@ -463,6 +467,8 @@ function validateProviderCredentials(provider: ProviderName): void {
* - SILICONFLOW_BASE_URL: SiliconFlow endpoint (optional, defaults to https://api.siliconflow.com/v1)
* - SGLANG_API_KEY: SGLang API key
* - SGLANG_BASE_URL: SGLang endpoint (optional)
* - MODELSCOPE_API_KEY: ModelScope API key
* - MODELSCOPE_BASE_URL: ModelScope endpoint (optional)
*/
export function getAIModel(overrides?: ClientOverrides): ModelConfig {
// SECURITY: Prevent SSRF attacks (GHSA-9qf7-mprq-9qgm)
@@ -537,6 +543,7 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
`- AZURE_API_KEY for Azure\n` +
`- SILICONFLOW_API_KEY for SiliconFlow\n` +
`- SGLANG_API_KEY for SGLang\n` +
`- MODELSCOPE_API_KEY for ModelScope\n` +
`Or set AI_PROVIDER=ollama for local Ollama.`,
)
} else {
@@ -892,9 +899,23 @@ export function getAIModel(overrides?: ClientOverrides): ModelConfig {
break
}
case "modelscope": {
const apiKey = overrides?.apiKey || process.env.MODELSCOPE_API_KEY
const baseURL =
overrides?.baseUrl ||
process.env.MODELSCOPE_BASE_URL ||
"https://api-inference.modelscope.cn/v1"
const modelscopeProvider = createOpenAI({
apiKey,
baseURL,
})
model = modelscopeProvider.chat(modelId)
break
}
default:
throw new Error(
`Unknown AI provider: ${provider}. Supported providers: bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway, edgeone, doubao`,
`Unknown AI provider: ${provider}. Supported providers: bedrock, openai, anthropic, google, azure, ollama, openrouter, deepseek, siliconflow, sglang, gateway, edgeone, doubao, modelscope`,
)
}

View File

@@ -28,7 +28,8 @@
"azure": "Azure OpenAI",
"openrouter": "OpenRouter",
"deepseek": "DeepSeek",
"siliconflow": "SiliconFlow"
"siliconflow": "SiliconFlow",
"modelscope": "ModelScope"
},
"chat": {
"placeholder": "Describe your diagram or upload a file...",
@@ -106,7 +107,13 @@
"diagramActions": "Diagram Actions",
"diagramActionsDescription": "Manage diagram history and exports",
"history": "History",
"download": "Download"
"download": "Download",
"proxy": "Proxy Settings",
"proxyDescription": "Configure HTTP/HTTPS proxy for API requests (Desktop only)",
"httpProxy": "HTTP Proxy",
"httpsProxy": "HTTPS Proxy",
"applyProxy": "Apply",
"proxyApplied": "Proxy settings applied"
},
"save": {
"title": "Save Diagram",
@@ -192,7 +199,9 @@
"description": "Paste a URL to extract and analyze its content",
"Extracting": "Extracting...",
"extract": "Extract",
"Cancel": "Cancel"
"Cancel": "Cancel",
"enterUrl": "Please enter a URL",
"invalidFormat": "Invalid URL format"
},
"reasoning": {
"thinking": "Thinking...",

View File

@@ -28,7 +28,8 @@
"azure": "Azure OpenAI",
"openrouter": "OpenRouter",
"deepseek": "DeepSeek",
"siliconflow": "SiliconFlow"
"siliconflow": "SiliconFlow",
"modelscope": "ModelScope"
},
"chat": {
"placeholder": "ダイアグラムを説明するか、ファイルをアップロード...",
@@ -106,7 +107,13 @@
"diagramActions": "ダイアグラム操作",
"diagramActionsDescription": "ダイアグラムの履歴とエクスポートを管理",
"history": "履歴",
"download": "ダウンロード"
"download": "ダウンロード",
"proxy": "プロキシ設定",
"proxyDescription": "API リクエスト用の HTTP/HTTPS プロキシを設定(デスクトップ版のみ)",
"httpProxy": "HTTP プロキシ",
"httpsProxy": "HTTPS プロキシ",
"applyProxy": "適用",
"proxyApplied": "プロキシ設定が適用されました"
},
"save": {
"title": "ダイアグラムを保存",
@@ -192,7 +199,9 @@
"description": "URLを貼り付けてそのコンテンツを抽出および分析します",
"Extracting": "抽出中...",
"extract": "抽出",
"Cancel": "キャンセル"
"Cancel": "キャンセル",
"enterUrl": "URLを入力してください",
"invalidFormat": "無効なURL形式です"
},
"reasoning": {
"thinking": "考え中...",

View File

@@ -28,7 +28,8 @@
"azure": "Azure OpenAI",
"openrouter": "OpenRouter",
"deepseek": "DeepSeek",
"siliconflow": "SiliconFlow"
"siliconflow": "SiliconFlow",
"modelscope": "ModelScope"
},
"chat": {
"placeholder": "描述您的图表或上传文件...",
@@ -106,7 +107,13 @@
"diagramActions": "图表操作",
"diagramActionsDescription": "管理图表历史记录和导出",
"history": "历史记录",
"download": "下载"
"download": "下载",
"proxy": "代理设置",
"proxyDescription": "配置 API 请求的 HTTP/HTTPS 代理(仅桌面版)",
"httpProxy": "HTTP 代理",
"httpsProxy": "HTTPS 代理",
"applyProxy": "应用",
"proxyApplied": "代理设置已应用"
},
"save": {
"title": "保存图表",
@@ -192,7 +199,9 @@
"description": "粘贴 URL 以提取和分析其内容",
"Extracting": "提取中...",
"extract": "提取",
"Cancel": "取消"
"Cancel": "取消",
"enterUrl": "请输入 URL",
"invalidFormat": "URL 格式无效"
},
"reasoning": {
"thinking": "思考中...",

View File

@@ -13,6 +13,7 @@ export type ProviderName =
| "gateway"
| "edgeone"
| "doubao"
| "modelscope"
// Individual model configuration
export interface ModelConfig {
@@ -91,6 +92,10 @@ export const PROVIDER_INFO: Record<
label: "Doubao (ByteDance)",
defaultBaseUrl: "https://ark.cn-beijing.volces.com/api/v3",
},
modelscope: {
label: "ModelScope",
defaultBaseUrl: "https://api-inference.modelscope.cn/v1",
},
}
// Suggested models per provider for quick add
@@ -231,6 +236,17 @@ export const SUGGESTED_MODELS: Record<ProviderName, string[]> = {
"doubao-pro-32k-241215",
"doubao-pro-256k-241215",
],
modelscope: [
// Qwen
"Qwen/Qwen2.5-72B-Instruct",
"Qwen/Qwen2.5-32B-Instruct",
"Qwen/Qwen3-235B-A22B-Instruct-2507",
"Qwen/Qwen3-VL-235B-A22B-Instruct",
"Qwen/Qwen3-32B",
// DeepSeek
"deepseek-ai/DeepSeek-R1-0528",
"deepseek-ai/DeepSeek-V3.2",
],
}
// Helper to generate UUID

View File

@@ -1,6 +1,6 @@
{
"name": "next-ai-draw-io",
"version": "0.4.9",
"version": "0.4.10",
"license": "Apache-2.0",
"private": true,
"main": "dist-electron/main/index.js",
@@ -24,6 +24,7 @@
"dist": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml",
"dist:mac": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml --mac",
"dist:win": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml --win",
"dist:win:build": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml --win --publish never",
"dist:linux": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml --linux",
"dist:all": "npm run electron:build && npm run electron:prepare && npx electron-builder --config electron/electron-builder.yml --mac --win --linux",
"test": "vitest",
@@ -72,6 +73,7 @@
"jsonrepair": "^3.13.1",
"lucide-react": "^0.562.0",
"motion": "^12.23.25",
"nanoid": "^3.3.11",
"negotiator": "^1.0.0",
"next": "^16.0.7",
"ollama-ai-provider-v2": "^2.0.0",

View File

@@ -1,6 +1,6 @@
{
"name": "@next-ai-drawio/mcp-server",
"version": "0.1.11",
"version": "0.1.12",
"description": "MCP server for Next AI Draw.io - AI-powered diagram generation with real-time browser preview",
"type": "module",
"main": "dist/index.js",
@@ -21,16 +21,16 @@
"claude",
"model-context-protocol"
],
"author": "Biki-dev",
"author": "DayuanJiang",
"license": "Apache-2.0",
"repository": {
"type": "git",
"url": "https://github.com/Biki-dev/next-ai-draw-io",
"url": "https://github.com/DayuanJiang/next-ai-draw-io",
"directory": "packages/mcp-server"
},
"homepage": "https://next-ai-drawio.jiang.jp",
"bugs": {
"url": "https://github.com/Biki-dev/next-ai-draw-io/issues"
"url": "https://github.com/DayuanJiang/next-ai-draw-io/issues"
},
"publishConfig": {
"access": "public"