Commit Graph

20 Commits

Author SHA1 Message Date
Ted Cao
98b890bb06 feat: add Vercel AI Gateway support (#274)
* feat: add Vercel AI Gateway support

- Updated environment configuration to include AI_GATEWAY_API_KEY for unified access to multiple AI providers.
- Added gateway provider to the list of supported AI providers in the codebase.
- Enhanced documentation to explain the usage of Vercel AI Gateway and its model format.

This change simplifies authentication and allows users to switch between providers seamlessly.

* Update package @ai-sdk/gateway to latest version 2.0.21
2025-12-17 12:43:33 +09:00
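For reference, a minimal sketch of how the gateway provider described above might be wired in lib/ai-providers.ts — createGateway is the real @ai-sdk/gateway factory, but the surrounding code is an assumption, not the committed implementation:

```ts
import { createGateway } from "@ai-sdk/gateway";

// AI_GATEWAY_API_KEY gives unified access to every provider routed through the gateway.
const gateway = createGateway({
    apiKey: process.env.AI_GATEWAY_API_KEY,
});

// Gateway models are addressed as "<provider>/<model>", e.g. "openai/gpt-4o".
const model = gateway("openai/gpt-4o");
```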
Dayuan Jiang
8b9336466f feat: make PDF/text extraction char limit configurable via env (#214)
Add NEXT_PUBLIC_MAX_EXTRACTED_CHARS environment variable to allow
configuring the maximum characters extracted from PDF and text files.
Defaults to 150000 (150k chars) if not set.
2025-12-11 14:14:31 +09:00
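A sketch of how that limit could be consumed, with the documented 150000-character fallback; the helper name truncateExtractedText is hypothetical:

```ts
// Falls back to 150k chars when NEXT_PUBLIC_MAX_EXTRACTED_CHARS is unset or invalid.
const MAX_EXTRACTED_CHARS =
    Number(process.env.NEXT_PUBLIC_MAX_EXTRACTED_CHARS) || 150_000;

export function truncateExtractedText(text: string): string {
    return text.length > MAX_EXTRACTED_CHARS
        ? text.slice(0, MAX_EXTRACTED_CHARS)
        : text;
}
```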
Dayuan Jiang
ee514efa9e fix: implement AZURE_RESOURCE_NAME config for Azure OpenAI (#213)
Previously AZURE_RESOURCE_NAME was documented in env.example but not
actually used in the code. This caused Azure OpenAI configuration to fail
when users set AZURE_RESOURCE_NAME instead of AZURE_BASE_URL.

Changes:
- Read AZURE_RESOURCE_NAME from environment and pass to createAzure()
- resourceName constructs endpoint: https://{name}.openai.azure.com/openai/v1
- baseURL takes precedence over resourceName when both are set
- Updated env.example with clearer documentation

Fixes #208
2025-12-11 13:32:33 +09:00
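The precedence rule above might look roughly like this — createAzure is the real @ai-sdk/azure factory; the env var names follow env.example, and the branching is a sketch rather than the exact code:

```ts
import { createAzure } from "@ai-sdk/azure";

const azure = createAzure({
    apiKey: process.env.AZURE_API_KEY,
    // baseURL wins when both are set; otherwise resourceName expands to
    // https://{name}.openai.azure.com/...
    baseURL: process.env.AZURE_BASE_URL,
    resourceName: process.env.AZURE_BASE_URL
        ? undefined
        : process.env.AZURE_RESOURCE_NAME,
});
```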
Biki Kalita
a047a6ff97 feat: Display AI reasoning/thinking blocks in chat interface (#152)
* feat: Add reasoning/thinking blocks display in chat interface

* feat: add multi-provider options support and replace custom reasoning UI with AI Elements

* resolve conflicting reasoning configs and correct provider-specific reasoning parameters

* try to solve conflict

* fix: simplify reasoning display and remove unnecessary dependencies

- Remove Streamdown dependency (~5MB) - reasoning is plain text only
- Fix Bedrock providerOptions merging for Claude reasoning configs
- Remove unsupported DeepSeek reasoning configuration
- Clean up unused environment variables (REASONING_BUDGET_TOKENS, REASONING_EFFORT, DEEPSEEK_REASONING_*)
- Remove dead commented code from route.ts

Reasoning blocks contain plain thinking text and don't need markdown/diagram/code rendering.

* feat: comprehensive reasoning support improvements

Major improvements:
- Auto-enable reasoning display for all supported models
- Fix provider-specific reasoning configurations
- Remove unnecessary Streamdown dependency (~5MB)
- Clean up debug logging

Provider changes:
- OpenAI: Auto-enable reasoningSummary for o1/o3/gpt-5 models
- Google: Auto-enable includeThoughts for Gemini 2.5/3 models
- Bedrock: Restrict reasoningConfig to only Claude/Nova (fixes MiniMax error)
- Ollama: Add thinking support for qwen3-like models

Other improvements:
- Remove ENABLE_REASONING toggle (always enabled)
- Fix Bedrock providerOptions merging for Claude
- Simplify reasoning component (plain text rendering)
- Clean up unused environment variables

* fix: critical bugs and documentation gaps in reasoning support

Critical fixes:
- Fix Bedrock shallow merge bug (deep merge preserves anthropicBeta + reasoningConfig)
- Add parseInt validation with parseIntSafe helper (prevents NaN errors)
- Validate all numeric env vars with min/max ranges

Documentation improvements:
- Add BEDROCK_REASONING_BUDGET_TOKENS and BEDROCK_REASONING_EFFORT to env.example
- Add OLLAMA_ENABLE_THINKING to env.example
- Update JSDoc with accurate env var list and ranges

Code cleanup:
- Remove debug console.log statements from route.ts
- Refactor duplicate providerOptions assignments

---------

Co-authored-by: Dayuan Jiang <34411969+DayuanJiang@users.noreply.github.com>
Co-authored-by: Dayuan Jiang <jdy.toh@gmail.com>
2025-12-11 00:24:43 +09:00
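The PR above repeatedly mentions a parseIntSafe helper with min/max validation; a plausible reconstruction (signature and ranges are assumptions, not the committed code):

```ts
// Clamp a numeric env var to [min, max], falling back on missing or non-numeric input.
function parseIntSafe(
    value: string | undefined,
    fallback: number,
    min: number,
    max: number
): number {
    const parsed = Number.parseInt(value ?? "", 10);
    if (Number.isNaN(parsed)) return fallback;
    return Math.min(max, Math.max(min, parsed));
}

// e.g. reasoning budget tokens for Bedrock Claude (values here are illustrative).
const budgetTokens = parseIntSafe(
    process.env.BEDROCK_REASONING_BUDGET_TOKENS,
    1024,
    1024,
    64_000
);
```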
Dayuan Jiang
d2ba133eaf feat: add PDF and text file upload support (#205)
- Add client-side PDF text extraction using unpdf library
- Support text files (.txt, .md, .json, .csv, .py, .js, .ts, etc.)
- Add file preview with character count for PDF/text files
- Add 150k character limit for extracted content
- Highlight Paper to Diagram example with NEW badge
- Fix React hydration error by adding explicit IDs to ResizablePanelGroup
- Remove code duplication by centralizing file utilities in pdf-utils.ts
2025-12-10 21:32:35 +09:00
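For orientation, a sketch of what the unpdf-based extraction in pdf-utils.ts might look like — getDocumentProxy and extractText are real unpdf exports, while the wrapper function is illustrative:

```ts
import { extractText, getDocumentProxy } from "unpdf";

export async function extractPdfText(file: File): Promise<string> {
    const pdf = await getDocumentProxy(new Uint8Array(await file.arrayBuffer()));
    const { text } = await extractText(pdf, { mergePages: true });
    return text.slice(0, 150_000); // the 150k character cap mentioned above
}
```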
terrydash
97bb350dd6 feat: add support for self-hosted draw.io via DRAWIO_BASE_URL environment variable (#163)
## Summary
Add support for self-hosted draw.io instances via build-time configuration.

## Problem
In some corporate environments, `embed.diagrams.net` is blocked by network policies. 
Users cannot use the application without access to the default draw.io embed URL.

## Solution
- Add `NEXT_PUBLIC_DRAWIO_BASE_URL` environment variable support
- Pass the `baseUrl` prop to the `DrawIoEmbed` component
- Configure Dockerfile to accept build-time argument for the draw.io URL

## Usage
```yaml
# docker-compose.yaml
services:
  drawio:
    image: jgraph/drawio:latest
    ports:
      - "8080:8080"
  
  next-ai-draw-io:
    build:
      context: .
      args:
        - NEXT_PUBLIC_DRAWIO_BASE_URL=http://drawio:8080
```

Or build directly:
```bash
docker build --build-arg NEXT_PUBLIC_DRAWIO_BASE_URL=http://localhost:8080 -t next-ai-draw-io .
```

**Note:** This is a build-time configuration. To change the draw.io URL, you need to rebuild the Docker image.
2025-12-09 22:00:54 +09:00
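On the code side, the change above amounts to threading the build-time URL into the embed component; a sketch, assuming DrawIoEmbed and its baseUrl prop come from react-drawio:

```tsx
// Falls back to the hosted instance when no self-hosted URL is baked in at build time.
<DrawIoEmbed
    baseUrl={process.env.NEXT_PUBLIC_DRAWIO_BASE_URL ?? "https://embed.diagrams.net"}
/>
```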
QiyuanChen
d8cdd049d1 feat: add SiliconFlow as a supported AI provider (#137)
* feat: add SiliconFlow as a supported AI provider in documentation and configuration

* fix: update SiliconFlow configuration comment to English
2025-12-07 10:22:57 +09:00
Biki Kalita
8b578a456e fix: Remove hardcoded temperature parameter to support models that don't accept it (#133)
* Fix: remove hardcoded temperature parameter to support reasoning models

* feat: make temperature configurable via AI_TEMPERATURE env var

- Instead of removing temperature entirely, make it optional via env var
- Set AI_TEMPERATURE=0 for deterministic output (recommended for diagrams)
- Leave unset for models that don't support temperature (e.g., GPT-5.1 reasoning)

* docs: add AI_TEMPERATURE env var documentation

- Update env.example with AI_TEMPERATURE option
- Update README.md configuration section
- Add Temperature Setting section in ai-providers.md

* docs: add TEMPERATURE env var documentation

- Update env.example with TEMPERATURE option
- Update README.md, README_CN.md, README_JA.md configuration sections
- Add Temperature Setting section in ai-providers.md
- Update route.ts to use TEMPERATURE env var

---------

Co-authored-by: dayuan.jiang <jiangdy@amazon.co.jp>
2025-12-07 01:34:59 +09:00
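A sketch of the resulting behaviour in route.ts: only pass temperature when TEMPERATURE is set, so models that reject the parameter keep working (model and messages are assumed to be defined earlier in the route):

```ts
import { streamText } from "ai";

const temperature =
    process.env.TEMPERATURE !== undefined && process.env.TEMPERATURE !== ""
        ? Number(process.env.TEMPERATURE)
        : undefined;

const result = streamText({
    model,
    messages,
    // Spread so the key is omitted entirely when unset, rather than sent as undefined.
    ...(temperature !== undefined ? { temperature } : {}),
});
```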
Dayuan Jiang
8d898d8adc fix: revert maxDuration to static value (Next.js requirement) (#121) 2025-12-06 18:04:23 +09:00
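Next.js requires route segment config such as maxDuration to be a statically analyzable literal, which is why the env-driven version from #120 below was rolled back; the value here is an example, not necessarily what the route uses:

```ts
// app/api/chat/route.ts (path assumed): must be a literal, not process.env.MAX_DURATION.
export const maxDuration = 60;
```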
Dayuan Jiang
1e0b1ed970 feat: make maxDuration configurable via MAX_DURATION env (#120) 2025-12-06 17:47:50 +09:00
Twelveeee
3fb349fb3e fix: clear button can't clear error msg & feat: add settings dialog and access code (#77)
* fix: clear button can't clear error msg

* new: add settings dialog and access code

* fix: address review feedback - dark mode, types, formatting

* feat: only show Settings button when access code is required

* refactor: rename ACCESS_CODES to ACCESS_CODE_LIST

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-05 22:09:34 +09:00
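A hypothetical server-side check against ACCESS_CODE_LIST; the comma-separated format and helper name are assumptions, but the behaviour matches the PR (settings only matter when codes are configured):

```ts
function isAccessCodeValid(code: string | undefined): boolean {
    const allowed = (process.env.ACCESS_CODE_LIST ?? "")
        .split(",")
        .map((c) => c.trim())
        .filter(Boolean);
    if (allowed.length === 0) return true; // no codes configured => open access
    return code !== undefined && allowed.includes(code);
}
```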
Dayuan Jiang
ed29e32ba3 feat: restore Langfuse observability integration (#103)
- Add lib/langfuse.ts with client, trace input/output, telemetry config
- Add instrumentation.ts for OpenTelemetry setup with Langfuse span processor
- Add /api/log-save endpoint for logging diagram saves
- Add /api/log-feedback endpoint for thumbs up/down feedback
- Update chat route with sessionId tracking and telemetry
- Add feedback buttons (thumbs up/down) to chat messages
- Add sessionId tracking throughout the app
- Update env.example with Langfuse configuration
- Add @langfuse/client, @langfuse/otel, @langfuse/tracing, @opentelemetry/sdk-trace-node
2025-12-05 21:15:02 +09:00
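A sketch of the instrumentation.ts wiring described above — LangfuseSpanProcessor comes from @langfuse/otel and NodeTracerProvider from @opentelemetry/sdk-trace-node, but the exact setup depends on the installed SDK version:

```ts
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";

export function register() {
    // Export spans (including AI SDK telemetry) to Langfuse.
    const provider = new NodeTracerProvider({
        spanProcessors: [new LangfuseSpanProcessor()],
    });
    provider.register();
}
```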
dayuan.jiang
ff6f130f8a refactor: remove Langfuse observability integration
- Delete lib/langfuse.ts, instrumentation.ts
- Remove API routes: log-save, log-feedback
- Remove feedback buttons (thumbs up/down) from chat
- Remove sessionId tracking throughout codebase
- Remove @langfuse/*, @opentelemetry dependencies
- Clean up env.example
2025-12-05 01:30:02 +09:00
Dayuan Jiang
fa1b02ad78 feat: integrate Langfuse for LLM observability (#66)
* feat: integrate Langfuse for LLM observability

- Add instrumentation.ts with Langfuse OpenTelemetry exporter
- Enable experimental telemetry on streamText calls
- Add instrumentationHook to Next.js config
- Install required dependencies (@vercel/otel, langfuse-vercel, etc.)

* feat: add optional Langfuse observability integration

- Add session tracking with unique sessionId per conversation
- Add user tracking via IP address (x-forwarded-for header)
- Make telemetry conditional - only enabled if LANGFUSE_PUBLIC_KEY is set
- Add environment variable validation in instrumentation.ts
- Add sessionId validation (type check + 200 char limit)
- Update env.example with Langfuse configuration docs
- Remove unused langfuse-vercel and @vercel/otel packages

* fix: remove deprecated instrumentationHook (enabled by default in Next.js 15)
2025-12-04 00:23:09 +09:00
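The conditional telemetry could look roughly like this — experimental_telemetry is the real AI SDK option on streamText; the metadata keys are assumptions, and sessionId is assumed to have passed the type/length validation mentioned above:

```ts
const result = streamText({
    model,
    messages,
    experimental_telemetry: {
        // Only emit telemetry when Langfuse is actually configured.
        isEnabled: Boolean(process.env.LANGFUSE_PUBLIC_KEY),
        metadata: { sessionId },
    },
});
```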
dayuan.jiang
45ab934288 feat: add DeepSeek as AI provider
- Install @ai-sdk/deepseek package
- Add DeepSeek provider support to lib/ai-providers.ts
- Add DeepSeek configuration to env.example
- Update README.md with DeepSeek in provider list
- Support both default and custom base URL for DeepSeek
2025-12-02 11:52:09 +09:00
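Roughly what the DeepSeek branch in lib/ai-providers.ts might look like — createDeepSeek is the real @ai-sdk/deepseek factory; the env var names mirror env.example conventions:

```ts
import { createDeepSeek } from "@ai-sdk/deepseek";

const deepseek = createDeepSeek({
    apiKey: process.env.DEEPSEEK_API_KEY,
    // Custom base URL is optional; the official endpoint is used when unset.
    ...(process.env.DEEPSEEK_BASE_URL ? { baseURL: process.env.DEEPSEEK_BASE_URL } : {}),
});

const model = deepseek("deepseek-chat");
```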
Dan Zheng
d4fb635d98 fix: add custom Anthropic baseURL (#28)
* fix: add custom anthropic baseURL

* feat: add baseURL support for all AI providers

- Add GOOGLE_BASE_URL for Google Generative AI
- Add AZURE_BASE_URL for Azure OpenAI
- Add OLLAMA_BASE_URL support (was documented but not implemented)
- Add OPENROUTER_BASE_URL for OpenRouter
- Fix missing semicolon in Anthropic case
- Update env.example with new environment variables

Closes #20

---------

Co-authored-by: dayuan.jiang <jdy.toh@gmail.com>
2025-12-02 01:08:06 +09:00
ylxmf
d2dd501f3f feat: support OpenAI-compatible LLMs 2025-11-21 17:03:47 +08:00
dayuan.jiang
58dcb3c41a feat: add OpenRouter support and fix input disabling
- Add OpenRouter provider support with @openrouter/ai-sdk-provider
- Fix input not disabling during 'submitted' state for fast providers
- Apply disable logic to all interactive elements (textarea, buttons, handlers)
- Clean up env.example by removing model examples and separator blocks
- Upgrade zod to v4.1.12 for compatibility with ollama-ai-provider-v2
- Add debug logging for status changes in chat components
2025-11-15 14:29:18 +09:00
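A sketch of the "submitted"-state fix: useChat's status can be "submitted" | "streaming" | "ready" | "error", and fast providers can pass through "streaming" almost instantly, so both busy states should disable the input (import path and component are illustrative):

```tsx
import { useChat } from "@ai-sdk/react";

function ChatInput() {
    const { status } = useChat();
    // Disable while a request is in flight, not just while tokens are streaming.
    const isBusy = status === "submitted" || status === "streaming";
    return <textarea disabled={isBusy} />;
}
```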
dayuan.jiang
4a3abc2e39 add multiple providers 2025-11-15 13:36:42 +09:00
dayuan.jiang
34de984fb8 docs: update README and add env.example for API key configuration 2025-11-10 19:52:08 +09:00