* refactor: simplify LLM XML format to output bare mxCells only
- Update wrapWithMxFile() to always add root cells (id=0, id=1) automatically (see the sketch below)
- LLM now generates only mxCell elements starting from id=2 (no wrapper tags)
- Update system prompts and tool descriptions with new format instructions
- Update cached responses to remove root cells and wrapper tags
- Update truncation detection to check for complete mxCell endings
- Update documentation in xml_guide.md
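
A minimal sketch of the new wrapping step, assuming standard draw.io element names; the `mxGraphModel` attributes are omitted and the root-cell stripping refined in later commits is not shown here:

```typescript
// Sketch: wrap bare mxCell markup in a full draw.io document.
// The two root cells (id=0, id=1) are always added here, so the LLM
// only needs to emit cells starting from id=2.
export function wrapWithMxFile(cellsXml: string): string {
    return [
        '<mxfile>',
        '  <diagram name="Page-1">',
        '    <mxGraphModel>',
        '      <root>',
        '        <mxCell id="0" />',
        '        <mxCell id="1" parent="0" />',
        `        ${cellsXml.trim()}`,
        '      </root>',
        '    </mxGraphModel>',
        '  </diagram>',
        '</mxfile>',
    ].join('\n')
}
```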
* fix: address PR review issues for XML format refactor
- Fix critical bug: inconsistent truncation check using old </root> pattern
- Fix stale error message referencing </root> tag
- Add isMxCellXmlComplete() helper for consistent truncation detection (see the sketch below)
- Improve regex patterns to handle any attribute order in root cells
- Update wrapWithMxFile JSDoc to document root cell removal behavior
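
A sketch of the shared truncation check, assuming the model's output should end on a closed mxCell element; the real heuristics may differ:

```typescript
// Sketch: output counts as complete when it ends with a closed mxCell,
// either self-closing ("/>") or with an explicit closing tag.
export function isMxCellXmlComplete(xml: string): boolean {
    const trimmed = xml.trim()
    if (trimmed.length === 0) return false
    return trimmed.endsWith('/>') || trimmed.endsWith('</mxCell>')
}
```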
* fix: handle non-self-closing root cells in wrapWithMxFile regex
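
A sketch of the root-cell stripping this fix targets, written to accept any attribute order and both the self-closing and explicit-closing forms; the actual pattern in wrapWithMxFile may differ:

```typescript
// Sketch: drop root cells (id="0" / id="1") the model may still emit,
// whether written as <mxCell id="0"/> or <mxCell id="0"></mxCell>,
// with attributes in any order.
const rootCellPattern = /<mxCell\b(?=[^>]*\bid="[01]")[^>]*(?:\/>|>\s*<\/mxCell>)\s*/g

export function stripRootCells(cellsXml: string): string {
    return cellsXml.replace(rootCellPattern, '')
}
```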
- Add client-side PDF text extraction using the unpdf library (see the sketch after this list)
- Support text files (.txt, .md, .json, .csv, .py, .js, .ts, etc.)
- Add file preview with character count for PDF/text files
- Add 150k character limit for extracted content
- Highlight the "Paper to Diagram" example with a NEW badge
- Fix React hydration error by adding explicit IDs to ResizablePanelGroup
- Remove code duplication by centralizing file utilities in pdf-utils.ts
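
A sketch of the client-side extraction path; the `getDocumentProxy`/`extractText` calls follow the unpdf README, the helper name is illustrative, and the cap mirrors the 150k character limit above:

```typescript
import { extractText, getDocumentProxy } from 'unpdf'

const MAX_EXTRACTED_CHARS = 150_000

// Sketch: read an uploaded PDF in the browser and return its text,
// truncated to the character limit noted above.
export async function extractPdfText(file: File): Promise<string> {
    const data = new Uint8Array(await file.arrayBuffer())
    const pdf = await getDocumentProxy(data)
    const { text } = await extractText(pdf, { mergePages: true })
    return text.slice(0, MAX_EXTRACTED_CHARS)
}
```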
- Add Biome as formatter and linter (replaces Prettier)
- Configure Husky + lint-staged for pre-commit hooks (see the sketch after this list)
- Add VS Code settings for format on save
- Ignore components/ui/ (shadcn generated code)
- Remove semicolons, use 4-space indent
- Reformat all files to new style
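
A sketch of the lint-staged wiring, shown as a JS config module; the glob and the `biome check --write` invocation are assumptions about this setup, and the components/ui/ ignore is expected to live in biome.json:

```typescript
// lint-staged.config.mjs (sketch): run Biome's combined format + lint
// fixes on staged source files before each commit (invoked by Husky).
export default {
    '*.{ts,tsx,js,jsx,json,css,md}': ['biome check --write'],
}
```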
- Extract system prompts to a dedicated lib/system-prompts.ts module (see the sketch after this list)
- Add extended system prompt (~4000 tokens) for models with higher cache minimums (Opus 4.5, Haiku 4.5)
- Clean up debug logs while preserving informational and cache-related logs
- Improve code formatting and organization in chat route
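
A hypothetical shape for the extracted module; the prompt text, helper name, and model identifiers below are placeholders, not the project's actual values:

```typescript
// lib/system-prompts.ts (sketch)
export const SYSTEM_PROMPT = 'You generate draw.io diagrams as bare mxCell elements...'

// Longer variant (~4000 tokens in the real module) so prompt caching still
// engages on models with a higher cache minimum.
export const EXTENDED_SYSTEM_PROMPT = SYSTEM_PROMPT + '\n\n...additional examples and rules...'

// Placeholder identifiers; the real check targets Opus 4.5 / Haiku 4.5.
const HIGH_CACHE_MINIMUM_MODELS = ['opus-4-5', 'haiku-4-5']

export function getSystemPrompt(modelId: string): string {
    return HIGH_CACHE_MINIMUM_MODELS.some((m) => modelId.includes(m))
        ? EXTENDED_SYSTEM_PROMPT
        : SYSTEM_PROMPT
}
```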
- Add lib/cached-responses.ts with pre-generated XML for 4 example prompts
- Modify the chat API route to check the cache before calling the AI (see the sketch after this list)
- Cache returns instant response (~0.26s) vs AI generation (~20-25s)
- Add "(cached for instant response)" text to example panel
- Cache only activates for first message with empty diagram
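
A sketch of the cache short-circuit in the chat route; `getCachedResponse`, the request shape, and the response handling are illustrative of the behaviour described above, not the exact implementation:

```typescript
import { getCachedResponse } from '@/lib/cached-responses'

export async function POST(req: Request) {
    const { messages, diagramXml } = await req.json()

    // Cache only applies to the very first message when no diagram exists yet
    const isFirstMessage = messages.length === 1
    const diagramIsEmpty = !diagramXml || diagramXml.trim() === ''

    if (isFirstMessage && diagramIsEmpty) {
        const cached = getCachedResponse(messages[0].content)
        if (cached) {
            // Pre-generated XML for the example prompts: ~0.26s vs ~20-25s
            return new Response(cached)
        }
    }

    // ...otherwise fall through to normal AI generation
}
```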