Fix image inputs and @ file mentions for backend provider #529
ki-cooley wants to merge 44 commits into glowingjade:main from
Conversation
Implements the backend WebSocket server that:
- Connects to Claude API with streaming support
- Exposes vault operation tools (read, write, search, list, delete)
- Uses bidirectional RPC protocol for vault operations
- Supports token-based authentication
- Includes Dockerfile for Railway deployment

The backend acts as a bridge between the Obsidian plugin and Claude, handling the agent loop and tool execution.
- Add MOCK_MODE environment variable for testing without API key
- Add mock agent that simulates Claude responses with tool usage
- Add automated test suite with 6 tests covering:
  - Simple prompt/response streaming
  - vault_list, vault_search, vault_read, vault_write tools
  - Ping/pong keepalive
- Add interactive test client for manual testing
- Update package.json with dev:mock and test scripts

All tests pass: 6/6
- Add railway.toml at repo root to point to backend/Dockerfile
- Update Dockerfile to build from repo root context
- Build TypeScript inside container instead of expecting pre-built dist/
Add backend service for Claude agent
- Add /health and / endpoints that return JSON status
- Attach WebSocket server to HTTP server
- Update Dockerfile health check to use /health endpoint
- Set buildContext = 'backend' in railway.toml
- Fix Dockerfile COPY paths to be relative to backend dir
- Update healthcheckPath to /health
Implements additional Claude Agent SDK-like tools for more powerful vault operations:
- vault_edit: Precise string replacement in files
- vault_grep: Regex search across file contents with line numbers
- vault_glob: File pattern matching with glob syntax
- vault_rename: Move/rename files with link updates

Also adds UI formatting for new tool results with clickable wikilinks.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Loads CLAUDE.md from vault root for project-specific instructions
- Loads .claude/instructions.md as alternative location
- Loads custom skills from .claude/skills/*.md
- Skills can be invoked by name (e.g., "/weekly-review")
- Updates system prompt with new tool capabilities

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add model field to PromptMessage protocol
- Pass selected model from BackendProvider to WebSocket
- Backend agent now uses model from request instead of hardcoded default
- Default still falls back to claude-opus-4-5-20250514 if not specified

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- claude-opus-4-5-20251101 (was 20250514)
- claude-sonnet-4-5-20250929
- claude-haiku-4-5-20251001
- claude-opus-4-1-20250805

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Collapsed file lists: show first 3 files with "+N more" instead of full lists
- Edit batching: suppress duplicate edit messages within 5 second window
- Clean up temp test scripts and debug files
- Add ConflictManager and instance singleton for backend
- Update various UI components for better integration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
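The edit-batching behavior described above can be sketched roughly as follows. `EditBatcher`, `shouldEmit`, and keying by file path are illustrative assumptions; only the 5-second window comes from the commit message.

```typescript
// Sketch of suppressing duplicate edit messages inside a short window.
// The class name and keying strategy are hypothetical; the 5s constant
// mirrors the commit description.
const EDIT_BATCH_WINDOW_MS = 5_000;

class EditBatcher {
  private lastSeen = new Map<string, number>();

  /** Returns true if this edit should be shown, false if suppressed as a duplicate. */
  shouldEmit(filePath: string, now: number = Date.now()): boolean {
    const prev = this.lastSeen.get(filePath);
    this.lastSeen.set(filePath, now);
    return prev === undefined || now - prev > EDIT_BATCH_WINDOW_MS;
  }
}
```

A batcher like this keeps the activity UI quiet while the agent makes several rapid edits to the same file, yet still surfaces a fresh message once the window elapses.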
Backend:
- Enable built-in Anthropic web_search tool for real-time web access
- Update system prompt to mention web search capability

Frontend:
- Add enterKeyHint="send" for mobile keyboard submit button
- Blur input after submit to hide mobile keyboard
- Add code block truncation (15 lines) with expand/collapse

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enables the backend to connect to external MCP servers via SSE transport,
discover their tools, and forward tool calls from Claude. This allows
cookbook search tools (and any future MCP servers) to work through the
backend provider, which is required for mobile support.
- New src/mcp-client.ts: MCP client manager using @modelcontextprotocol/sdk
- Modified agent.ts: merges MCP tools with vault tools, routes tool calls
- Modified index.ts: initializes MCP client after server starts (non-blocking)
- Added @modelcontextprotocol/sdk dependency
Configured via MCP_SERVERS env var (JSON):
{"server-name":{"type":"sse","url":"https://example.com/mcp/sse"}}
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
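Reading the MCP_SERVERS variable in the JSON shape shown above could look like this minimal sketch; the `McpServerConfig` interface and `parseMcpServers` helper are hypothetical names, not the backend's actual code.

```typescript
// Sketch of parsing the MCP_SERVERS env var. The config shape is inferred
// from the example JSON in the commit message; the real backend types may differ.
interface McpServerConfig {
  type: 'sse';
  url: string;
}

function parseMcpServers(raw: string | undefined): Record<string, McpServerConfig> {
  if (!raw) return {};
  try {
    return JSON.parse(raw) as Record<string, McpServerConfig>;
  } catch {
    // Malformed JSON should not crash server startup; just skip MCP config.
    console.warn('MCP_SERVERS is not valid JSON; ignoring');
    return {};
  }
}
```

At startup this would be called as `parseMcpServers(process.env.MCP_SERVERS)`, so a missing or malformed variable degrades to "no external MCP servers" instead of a crash.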
Replace @anthropic-ai/sdk + manual agent loop with @anthropic-ai/claude-agent-sdk. The SDK's query() handles tool execution, message history, and MCP connections automatically. Auth now supports CLAUDE_CODE_OAUTH_TOKEN (subscription) or ANTHROPIC_API_KEY. Default model updated to claude-opus-4-6.

- Rewrite agent.ts: replace manual streaming loop with SDK query()
- New vault-tools.ts: Zod v4 schemas + tool() + createSdkMcpServer()
- Delete mcp-client.ts (SDK handles MCP connections internally)
- Delete mcp-tools.ts (replaced by vault-tools.ts)
- Update index.ts: remove MCP client init, dual auth check
- Upgrade zod ^3.23 → ^4.0 (Agent SDK peer dependency)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Agent SDK CLI requires git at runtime. Also set HOME=/tmp/claude-home for writable config directory in Docker, and log full error stacks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…p diag

- Add stderr callback to query() to capture CLI subprocess errors
- Create /tmp/claude-home/.claude/ in Dockerfile so CLI has writable HOME
- Remove slow startup diagnostic that blocked module load

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Agent SDK CLI refuses --dangerously-skip-permissions when running as root. Create a 'claude' user and switch to it before running. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tool names like "mcp__vault-tools__vault_read" are now cleaned to just "vault_read" before sending to the plugin. This makes the activity accordion show meaningful tool names instead of MCP internals. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
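The cleanup described here could look roughly like this; `cleanToolName` is a hypothetical helper name, and the assumption is that SDK tool names follow the `mcp__<server>__<tool>` pattern quoted above.

```typescript
// Sketch of stripping the SDK's "mcp__<server>__" prefix so the activity UI
// shows "vault_read" instead of "mcp__vault-tools__vault_read".
// Hypothetical helper, not the exact plugin code.
function cleanToolName(name: string): string {
  // Lazily match the server segment so server names may contain hyphens.
  const match = /^mcp__.+?__(.+)$/.exec(name);
  return match ? match[1] : name;
}
```

Non-MCP names pass through unchanged, so built-in tools like `web_search` are unaffected.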
The SDK handles external MCP tool execution internally, so we never got tool_end events for tools like search_cookbooks. Now we track pending tool starts and emit synthetic tool_end events when the SDK moves to the next turn (text streaming or new assistant message). This ensures the plugin's activity accordion correctly shows all tool executions with proper start/end lifecycle. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replaces the old tool message display with a modern activity tracking system:
- ActivityAccordion: collapsible exploration summary (auto-expand during streaming)
- ActivityItem: individual activities with live timers, clickable file links
- EditDiffBlock: green/red diff display with Undo button for file changes
- EditHistory: singleton snapshot manager for revert functionality
- Full support for cookbook search tools (search_cookbooks, list_cookbook_sources)
- ActivityEvent type system with 13 activity types + generic fallback
- BackendProvider emits structured ActivityEvent objects during streaming
- tool-result-formatter parses raw results into structured diff/count/path data
- 320 lines of Obsidian-native CSS with theme variable integration
- Cleaned up debug logging artifacts from main.ts and settings.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…tocol-level

Railway's proxy doesn't forward WebSocket ping/pong frames, causing the server's protocol-level heartbeat to terminate connections after ~45 seconds. Now tracks last message activity time and allows 90s of inactivity. The plugin already sends application-level ping messages every 25s, which keep the connection alive through the proxy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
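The activity-based check can be sketched as a pure predicate on timestamps; `shouldTerminate` is an illustrative name, with only the 90s allowance taken from the commit message.

```typescript
// Sketch of the activity-based timeout: instead of relying on WebSocket
// protocol ping/pong frames (which the proxy does not forward), record the
// timestamp of the last *application-level* message and only drop the
// connection after a longer quiet period. The 90s constant mirrors the text;
// the 25s client ping keeps normal connections well under it.
const IDLE_LIMIT_MS = 90_000;

function shouldTerminate(lastMessageAt: number, now: number): boolean {
  return now - lastMessageAt > IDLE_LIMIT_MS;
}
```

A periodic sweep on the server would call this per connection and close only those that exceed the allowance, so a connection that missed one or two 25s pings still survives.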
Tell the agent to always include exact page numbers and source names when citing cookbook search results, preventing vague references. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix activity ID mismatch between BackendProvider and VaultRpcHandler that prevented undo from finding the correct snapshot. Track activity IDs from tool_start events and consume them in the RPC handler. - Fix revert for new files: delete file instead of writing empty content. - Add clickable file links in EditDiffBlock that open files in editor. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…MCP restriction

- Add externalResourceDir plugin setting for vault-relative PDF path
- Fix duplicate activity items by deduplicating editActivities by filePath+type
- Map search_cookbooks/list_cookbook_sources/web_search to proper activity types
- Remove blanket mobile MCP disable; let transports fail gracefully
- Update backend system prompt with source filter and citation format guidance
- Settings migration 12→13

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LLM was stripping [[cookbooks/file.pdf#page=N]] wikilinks and reformatting as plain text. Updated Cookbook Research Tools section to mark wikilink preservation as CRITICAL with explicit examples and instructions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BackendProvider: Race message promise against abort signal so clicking Stop immediately wakes the generator and sends cancel to backend. WebSocketClient: Notify active stream handlers on disconnect so generators don't hang forever when the connection drops. server.ts: Send safety-net complete message in finally block so the client never gets stuck waiting for a terminal event. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
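Racing a pending message promise against an abort signal, as described for BackendProvider, might look like this sketch; `raceWithAbort` is a hypothetical helper, not the plugin's actual code.

```typescript
// Sketch: reject immediately when the user clicks Stop, instead of waiting
// for the next WebSocket message to arrive. Uses the standard AbortSignal API.
function raceWithAbort<T>(promise: Promise<T>, signal: AbortSignal): Promise<T> {
  if (signal.aborted) return Promise.reject(new Error('aborted'));
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error('aborted'));
    signal.addEventListener('abort', onAbort, { once: true });
    promise.then(
      (v) => { signal.removeEventListener('abort', onAbort); resolve(v); },
      (e) => { signal.removeEventListener('abort', onAbort); reject(e); },
    );
  });
}
```

Inside the stream generator, awaiting `raceWithAbort(nextMessage, abortSignal)` means a Stop click wakes the loop at once, which can then send the cancel message to the backend.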
When an external MCP server dies (e.g. cookbook-rag OOM), the SDK's query() hangs forever waiting for a tool response. This wraps the iterator with a 2-minute timeout that aborts the agent and yields an error if no message arrives. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…, stream thinking tokens

- Replace 2-min withMessageTimeout (killed legitimate multi-tool turns) with an activity-based approach: heartbeat updated by SDK messages, vault tool handlers, and stderr; 15s interval check; 5-min inactivity threshold
- Stream thinking_delta tokens in real time (Cursor-like UX)
- Emit tool_start from content_block_start stream events (earlier than waiting for the full assistant message), with dedup against the assistant fallback
- Add heartbeat callback to createVaultMcpServer, called by all 9 vault tool handlers on execution

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Revert early tool_start from content_block_start (sent empty input {},
caused "Cookbook search: ''" in activity UI). Tool_start now emits from
the assistant message which has the complete input parameters.
- Keep thinking_delta streaming from stream events (real-time thinking)
- Add pending tool cleanup in catch block so cancelled requests don't
leave stuck "running" activities in the UI
- Remove unused streamStartedToolIds tracking
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix 2: Track completeSent flag in server.ts to prevent safety-net complete from racing with tool_end events. Delay handler deletion in WebSocketClient by 1s after complete.
- Fix 3: Enable extended thinking (budgetTokens: 10000) in SDK query. Add diagnostic logging for thinking block start and deltas.
- Fix 4: Increase maxTurns 10→25, inactivity timeout 5→10min, RPC timeout 30→60s. Add result logging for timeout diagnosis.
- Add interspersed layout spec doc for future implementation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…on error
- Remove `thinking: { type: 'enabled', budgetTokens: 10000 }` option that
was causing the Agent SDK CLI to fail silently (2s empty response + exit 1)
- Track `completedSuccessfully` flag to suppress the spurious "process exited
with code 1" error the SDK throws after normal completion
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Root cause of empty responses was ANTHROPIC_API_KEY env var on Railway overriding the working CLAUDE_CODE_OAUTH_TOKEN. The thinking option itself is fine. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- maxTurns 25→50 to support complex multi-source research queries
- Detect error_max_turns and append truncation notice to response
- Add system prompt guidance: batch tool calls 5-8 at a time, present final synthesis instead of repeating findings in multiple formats

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace grouped activity layout (all activities at top, text at bottom) with chronologically interspersed layout — text and activities appear in the order they happen during agent execution, like Cursor. Also restore rendered markdown preview in edit diff blocks with a Raw/Rendered toggle button, defaulting to rendered view. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Strip [[]] from file names in edit diff headers, activity item links, and activity accordion labels. The link styling already distinguishes clickable filenames without needing bracket decoration. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
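Stripping the bracket decoration could be as simple as this sketch; `stripWikilinkBrackets` is an illustrative name, and wikilink aliases and embeds are ignored here.

```typescript
// Sketch: turn "[[cookbooks/file.pdf#page=3]]" into "cookbooks/file.pdf#page=3"
// for display in headers and labels, leaving plain names untouched.
function stripWikilinkBrackets(name: string): string {
  const match = /^\[\[(.+?)\]\]$/.exec(name);
  return match ? match[1] : name;
}
```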
- Add images field to PromptMessage protocol (both frontend and backend)
- Skip file reading/RAG in PromptGenerator for backend provider; pass file paths instead so the agent reads via vault_read
- Extract images from request in BackendProvider, send separately from prompt text via WebSocket
- Forward images through server.ts to agent.ts, construct multimodal Anthropic content blocks when images are present
- Add console.log tracing for image flow debugging

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR introduces a complete backend agent system for the Claudsidian plugin, adding a Node.js WebSocket server with RPC-based vault operations, activity tracking UI components, backend LLM provider integration, and rebranding from Smart Composer to Claudsidian across the codebase.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant FrontendClient as Frontend<br/>(Obsidian Plugin)
    participant WSServer as WebSocket Server<br/>(backend/server)
    participant Agent as Agent Loop<br/>(backend/agent)
    participant VaultOps as Vault Operations<br/>(RPC Handler)
    User->>FrontendClient: Submit prompt with context
    FrontendClient->>WSServer: PromptMessage (token auth)
    WSServer->>Agent: runAgent(prompt, bridge)
    Agent->>Agent: Build system prompt<br/>(memory, skills, context)
    Agent->>Agent: Stream Claude SDK events
    Agent->>WSServer: text_delta
    WSServer->>FrontendClient: TextDeltaMessage
    FrontendClient->>User: Display streaming text
    alt Tool Execution
        Agent->>Agent: tool_start event
        WSServer->>FrontendClient: ToolStartMessage
        FrontendClient->>User: Show activity indicator
        Agent->>WSServer: RpcRequestMessage<br/>(vault_read/write/etc)
        WSServer->>FrontendClient: RpcRequestMessage
        FrontendClient->>VaultOps: Execute vault RPC
        VaultOps->>VaultOps: File operation<br/>(read/edit/search)
        VaultOps->>FrontendClient: RpcResponseMessage (result)
        FrontendClient->>WSServer: RpcResponseMessage
        Agent->>Agent: Process tool result<br/>tool_end event
        WSServer->>FrontendClient: ToolEndMessage
        FrontendClient->>User: Update activity,<br/>show diff/result
    end
    Agent->>Agent: complete event
    WSServer->>FrontendClient: CompleteMessage
    FrontendClient->>User: Finalize response,<br/>enable apply actions
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx (1)
51-61: ⚠️ Potential issue | 🟡 Minor

Remove `TemplateTypeaheadOption`: it is unused dead code. The class is defined but never referenced anywhere in the codebase and is not exported. The "backwards compatibility" comment is misleading since the class cannot be used by external code. Remove it to reduce confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx` around lines 51 - 61, remove the dead, unused class TemplateTypeaheadOption (and its leading "Keep old class for backwards compatibility" comment); delete the entire class declaration that extends MenuOption and references Template (constructor, properties name and template) since it's not exported or referenced anywhere. Ensure there are no remaining references to TemplateTypeaheadOption and run the typechecker to confirm nothing else depends on it.

src/components/chat-view/ToolMessage.tsx (1)
28-49: ⚠️ Potential issue | 🟠 Major

Filter backend tool calls in `getToolMessageContent` for consistency with UI rendering. The `ToolMessage` component intentionally filters out `backend__` tools (lines 62-64) because they "clutter the UI and their results are shown inline as formatted content instead." However, `getToolMessageContent` includes all tool calls without filtering. Since this function is used in the `CopyButton` component to generate clipboard content, backend tools will leak into the copied text even though they're hidden from the rendered UI. Apply the same `!toolCall.request.name.startsWith('backend__')` filter to maintain consistency between what users see and what gets copied.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/ToolMessage.tsx` around lines 28 - 49, update getToolMessageContent so it filters out backend tool calls the same way the UI does: before mapping toolCalls, exclude any toolCall whose request.name starts with 'backend__'. This keeps getToolMessageContent (used by CopyButton) consistent with the ToolMessage rendering and prevents backend__ tool entries from appearing in copied clipboard text; modify the logic that iterates over message.toolCalls in getToolMessageContent to perform this filter first.
🟡 Minor comments (27)
src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx-161-171 (1)
161-171: ⚠️ Potential issue | 🟡 Minor

Template branch doesn't handle `nodeToRemove === null`, unlike the skill and command branches. Both the skill (line 179) and command (line 195) branches have a fallback that appends to root when `nodeToRemove` is null. The template branch silently does nothing in that case. If this edge case can occur, the template content will be lost.

Proposed fix for consistency:

```diff
 if (nodeToRemove) {
   const parent = nodeToRemove.getParentOrThrow()
   parent.splice(nodeToRemove.getIndexWithinParent(), 1, parsedNodes)
   const lastNode = parsedNodes[parsedNodes.length - 1]
   lastNode.selectEnd()
+} else {
+  const root = $getRoot()
+  for (const node of parsedNodes) {
+    root.append(node)
+  }
+  const lastNode = parsedNodes[parsedNodes.length - 1]
+  lastNode.selectEnd()
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx` around lines 161 - 171, the template branch for selectedOption.type === 'template' currently only inserts parsed nodes when nodeToRemove exists, so when nodeToRemove === null the template content is dropped; update the block handling selectedOption.template to mirror the skill/command branches by adding a fallback that, if nodeToRemove is null, appends parsedNodes to the editor root (e.g., root.append(parsedNodes) or equivalent), then selectEnd on the last parsed node; locate symbols selectedOption.template, $parseSerializedNode, nodeToRemove, parent.splice and lastNode.selectEnd to implement the fallback insertion.

src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx-179-184 (1)
179-184: ⚠️ Potential issue | 🟡 Minor

Appending a `TextNode` directly to `$getRoot()` is invalid in Lexical. The `RootNode` only accepts `ElementNode` children (e.g., `ParagraphNode`). `TextNode` must be wrapped in an `ElementNode` first. This affects lines 182-183 and similarly lines 197-198.

Wrap the text node in a `ParagraphNode`:

```diff
+import { $createParagraphNode } from 'lexical'
 ...
 } else {
   const root = $getRoot()
   const textNode = $createTextNode(skillCommand)
+  const paragraph = $createParagraphNode()
+  paragraph.append(textNode)
+  root.append(paragraph)
-  root.append(textNode)
   textNode.selectEnd()
 }
```

(Apply the same fix for the command branch at lines 196-199.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/chat-input/plugins/template/TemplatePlugin.tsx` around lines 179 - 184, the code appends a TextNode directly to $getRoot() which is invalid; instead create a ParagraphNode, append the TextNode into that ParagraphNode, then append the ParagraphNode to root. Update the branches that use $createTextNode(skillCommand) and root.append(...) (around the usage of $getRoot(), textNode, and skillCommand in TemplatePlugin.tsx) to wrap the created text node in a ParagraphNode before appending, and call selectEnd() on the text node as before.

backend/.env.example-15-16 (1)
15-16: ⚠️ Potential issue | 🟡 Minor

Comment/value mismatch for `CLAUDE_MODEL`. The comment states the default is `claude-sonnet-4-5-20250514`, but the example value is set to `claude-opus-4-6`. Pick one or clarify intent (e.g., "default in code" vs. "recommended for .env").
Verify each finding against the current code and only fix it if needed. In `@backend/.env.example` around lines 15 - 16, the comment and example value for CLAUDE_MODEL are inconsistent; update one so they match. Either change the comment to state that the example uses "claude-opus-4-6" (or clarify "recommended .env value") or change the example value to the documented default "claude-sonnet-4-5-20250514"; specifically edit the CLAUDE_MODEL entry so the comment and the variable value align and clearly indicate whether it's the default or a recommended override.

backend/src/utils.ts-14-19 (1)
14-19: ⚠️ Potential issue | 🟡 Minor

Invalid `LOG_LEVEL` value silently disables all logging. If `process.env.LOG_LEVEL` is set to a value not in `LOG_LEVELS` (e.g., `'verbose'`), `LOG_LEVELS[currentLogLevel]` returns `undefined`, and the `>=` comparison always yields `false`, suppressing every log message with no warning.

🛡️ Proposed fix (validate at startup):

```diff
-const currentLogLevel: LogLevel =
-  (process.env.LOG_LEVEL as LogLevel) || 'info';
+const VALID_LOG_LEVELS = new Set<string>(Object.keys(LOG_LEVELS));
+
+const currentLogLevel: LogLevel = (() => {
+  const envLevel = process.env.LOG_LEVEL;
+  if (envLevel && VALID_LOG_LEVELS.has(envLevel)) {
+    return envLevel as LogLevel;
+  }
+  if (envLevel) {
+    console.warn(`[utils] Invalid LOG_LEVEL "${envLevel}", defaulting to "info"`);
+  }
+  return 'info';
+})();
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/utils.ts` around lines 14 - 19, the code silently disables logging when process.env.LOG_LEVEL is invalid; update the initialization of currentLogLevel and shouldLog to validate process.env.LOG_LEVEL against LOG_LEVELS at startup: check that process.env.LOG_LEVEL is a key in LOG_LEVELS, and if not, log a warning (or fallback) and set currentLogLevel to a safe default like 'info'; ensure shouldLog continues to use LOG_LEVELS[currentLogLevel] after validation so comparisons never use undefined. Reference: currentLogLevel, shouldLog, LOG_LEVELS, process.env.LOG_LEVEL.

backend/src/utils.ts-62-86 (1)
62-86: ⚠️ Potential issue | 🟡 Minor

`withRetry` throws `undefined` if `maxAttempts` is 0. If a caller passes `{ maxAttempts: 0 }`, the loop body never executes, `lastError` stays `undefined`, and line 85 throws `undefined`. Consider guarding with a minimum of 1 attempt or throwing a descriptive error.

🛡️ Proposed fix:

```diff
-  const { maxAttempts = 3, baseDelayMs = 1000, maxDelayMs = 30000 } = options;
+  const { maxAttempts: rawAttempts = 3, baseDelayMs = 1000, maxDelayMs = 30000 } = options;
+  const maxAttempts = Math.max(1, rawAttempts);
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/utils.ts` around lines 62 - 86, the function withRetry can throw undefined when callers pass maxAttempts: 0 because lastError remains undefined; fix by enforcing a minimum of 1 attempt or validating inputs: compute const attempts = Math.max(1, options.maxAttempts ?? 3) (or throw a descriptive error if options.maxAttempts < 1), use attempts in the for loop instead of maxAttempts, and ensure the final throw uses a non-undefined Error (e.g., throw lastError ?? new Error("withRetry failed: no attempts were made")) so withRetry never throws undefined; update references to maxAttempts in the function accordingly.

src/utils/chat/promptGenerator.ts-274-324 (1)
274-324: ⚠️ Potential issue | 🟡 Minor

URL mentionables (`@url`) are silently dropped in the backend provider path. The non-backend path (lines 412-431) processes `MentionableUrl` entries and fetches website content, but the backend branch here doesn't handle them at all. If a user mentions a URL while using the backend provider, that context will be silently lost.

Consider either:
- Including URL mentionables in the backend path (e.g., listing them for the agent to fetch), or
- Logging a warning so users know URL mentions aren't supported with the backend provider.
```shell
#!/bin/bash
# Verify MentionableUrl type usage and whether the backend has any URL handling
rg -n 'MentionableUrl' --type=ts -C2
```
Verify each finding against the current code and only fix it if needed. In `@src/utils/chat/promptGenerator.ts` around lines 274 - 324, the backend branch in compileUserMessagePrompt (guarded by isBackendProvider()) currently ignores MentionableUrl entries in message.mentionables, dropping URL context; update that branch to collect URL mentionables (filter by m.type === 'url' / MentionableUrl) and either (a) append a URL section to filePrompt (e.g., "## Referenced URLs\n- <url>") so the agent can use vault_read or fetch them, or (b) at minimum emit a warning via console.warn/processLogger indicating URL mentions are not supported for backend providers; ensure you add the new urlPrompt into the returned promptContent.text alongside filePrompt and blockPrompt and keep the imageDataUrls handling unchanged.

src/core/backend/tool-result-formatter.ts-41-43 (1)
41-43: ⚠️ Potential issue | 🟡 Minor

Overly broad error detection with `result.includes('failed')`. Any tool result containing the word "failed" in normal content (e.g., "The test failed to reproduce the bug" from a `vault_read`) will be flagged as an error. Consider checking for a more specific pattern like `result.startsWith('Error:')` only, or matching against known error prefixes from your backend.

Proposed fix:

```diff
 const parsed: ParsedToolResult = {
-  isError: result.startsWith('Error:') || result.includes('failed'),
+  isError: result.startsWith('Error:'),
 };
```
Verify each finding against the current code and only fix it if needed. In `@src/core/backend/tool-result-formatter.ts` around lines 41 - 43, the current error detection in tool-result-formatter.ts flags any result containing 'failed', which is too broad; update the logic that sets parsed: ParsedToolResult (the parsed variable) to use stricter error detection: remove result.includes('failed') and instead only treat strings starting with known error prefixes (keep result.startsWith('Error:') or match a whitelist/regex of backend error prefixes like /^Error:|^Exception:|^\w+Error:/), or implement a helper (e.g., isBackendError) that checks against known error patterns; update references to parsed and the error-detection expression accordingly.

src/components/settings/modals/AddEmbeddingModelModal.tsx-41-45 (1)
41-45: ⚠️ Potential issue | 🟡 Minor

Fallback `defaultProvider` may reference a provider not in the user's configured list. If `availableProviders` is empty (no embedding-capable providers configured), `defaultProvider` falls back to `DEFAULT_PROVIDERS[0]` (OpenAI). The form will initialize with a `providerId` that doesn't exist in `plugin.settings.providers`, leading to a confusing initial state. The submit handler does validate provider existence (line 63), so this won't corrupt data, but the user experience could be improved by disabling the form or showing a message when no embedding-capable providers are available.
Verify each finding against the current code and only fix it if needed. In `@src/components/settings/modals/AddEmbeddingModelModal.tsx` around lines 41 - 45, the modal currently falls back to DEFAULT_PROVIDERS[0] when availableProviders is empty, which can initialize the form with a providerId not present in plugin.settings.providers; change the logic in AddEmbeddingModelModal so that if availableProviders.length === 0 you do NOT pick DEFAULT_PROVIDERS[0] but instead set defaultProvider to undefined and render a clear disabled state: show a short message/toast that no embedding-capable providers are configured, disable the form fields and submit button (or hide the form), and ensure the component's initial providerId state handles undefined safely; keep the existing submit-time validation in place but prevent the UX from initializing with an invalid provider.

backend/README.md-77-92 (1)
77-92: ⚠️ Potential issue | 🟡 Minor

Add a language identifier to the fenced code block. The ASCII diagram code block on line 77 is missing a language identifier, which triggers a markdownlint warning (MD040). Use a `text` info string for plain-text/diagram blocks.

🔧 Proposed fix:

````diff
-```
+```text
 ┌─────────────────────────────────────┐
````
Verify each finding against the current code and only fix it if needed. In `@backend/README.md` around lines 77 - 92, the fenced ASCII diagram block is missing a language identifier causing MD040; update the fenced code block (the ASCII diagram starting with the box labeled "WebSocket Server" / "Agent Loop" / "Vault Tools") to use a plain-text language tag by changing the opening triple backticks to triple backticks with "text" so the diagram is treated as plain text and markdownlint stops warning.

backend/Dockerfile-26-26 (1)
26-26: ⚠️ Potential issue | 🟡 Minor

Use `npm prune --omit=dev` instead of `npm prune --production`. The `--production` flag is deprecated in npm 7+ and triggers deprecation warnings. Node.js 22 ships with npm 10+, where `--omit=dev` is the recommended approach.

Suggested fix:

```diff
-RUN npm prune --production
+RUN npm prune --omit=dev
```
Verify each finding against the current code and only fix it if needed. In `@backend/Dockerfile` at line 26, replace the deprecated npm flag in the Dockerfile RUN step: change the RUN command that currently calls npm prune --production to use the modern npm flag --omit=dev instead (i.e., update the RUN npm prune invocation in the Dockerfile to use --omit=dev so it avoids npm 7+ deprecation warnings).

src/components/settings/modals/AddChatModelModal.tsx-36-44 (1)
36-44: ⚠️ Potential issue | 🟡 Minor

Edge case: empty `availableProviders` leads to a mismatched default. If all configured providers are `backend` type, `availableProviders` will be empty and the fallback `DEFAULT_PROVIDERS[0]` (openai) will be used for `formData`. The dropdown (line 99) will render no options, but the form will be initialized with the openai provider ID. This is a minor UX issue since users likely always have at least one non-backend provider, but worth noting.
Verify each finding against the current code and only fix it if needed. In `@src/components/settings/modals/AddChatModelModal.tsx` around lines 36 - 44, availableProviders can be empty causing defaultProvider to fall back to DEFAULT_PROVIDERS[0] while the dropdown has no options; change the initialization so availableProviders is the single source of truth: compute defaultProvider = availableProviders[0] ?? null and initialize formData.providerId to defaultProvider?.id ?? '' (or explicitly to DEFAULT_PROVIDERS[0].id only if you also add that fallback provider into the dropdown options), then update the dropdown render logic to either include the fallback DEFAULT_PROVIDERS[0] when availableProviders is empty or disable the select/submit when formData.providerId is empty; reference availableProviders, defaultProvider, formData, setFormData, plugin.settings.providers and DEFAULT_PROVIDERS in the changes.src/components/chat-view/ActivityAccordion.tsx-97-98 (1)
⚠️ Potential issue | 🟡 Minor

The `vault_rename` label renders `undefined` when `oldPath`/`newPath` are missing.

If `activity.oldPath` or `activity.newPath` is `undefined`, the template literal produces "Moved undefined → undefined". Add a fallback:

Proposed fix

```diff
 case 'vault_rename':
-  return `Moved ${activity.oldPath?.split('/').pop()} → ${activity.newPath?.split('/').pop()}`
+  return `Moved ${activity.oldPath?.split('/').pop() ?? 'file'} → ${activity.newPath?.split('/').pop() ?? 'file'}`
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/ActivityAccordion.tsx` around lines 97-98: the `'vault_rename'` branch currently builds the label with `activity.oldPath?.split('/').pop()` and `activity.newPath?.split('/').pop()`, which yields "undefined" if those properties are missing. Guard the `case 'vault_rename'` by checking `activity.oldPath` and `activity.newPath` (or using a helper like `getBasename`) and defaulting to a safe string such as `'unknown'` or `''` when undefined, so the returned string never contains "undefined".

`src/utils/chat/responseGenerator.ts` lines 436-446 (1)
⚠️ Potential issue | 🟡 Minor

Use `??` instead of `||` for `startTime` to avoid the falsy-zero pitfall.

`startTime` is a numeric timestamp. Using `||` means a `0` value would be treated as falsy and fall through to `newActivity.startTime`. While `0` is unlikely as a real timestamp, `??` (nullish coalescing) is the correct operator for "use existing if defined":

Proposed fix

```diff
 const merged: ActivityEvent = {
   ...existing,
   ...newActivity,
-  startTime: existing.startTime || newActivity.startTime,
+  startTime: existing.startTime ?? newActivity.startTime,
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/utils/chat/responseGenerator.ts` around lines 436-446: the `startTime` assignment in the merged `ActivityEvent` uses the `||` operator, which treats `0` as falsy. Change only the `startTime` line in the merged object from `startTime: existing.startTime || newActivity.startTime` to `startTime: existing.startTime ?? newActivity.startTime` so `existing.startTime` is preserved when it is `0` or another defined falsy value. Symbols: `merged`, `startTime`, `existing`, `newActivity`, `ActivityEvent`.

`src/core/backend/ConflictManager.ts` lines 8-8 (1)
⚠️ Potential issue | 🟡 Minor

Unused import: `Notice`.

`Notice` is imported but never used in this file.

Proposed fix

```diff
-import { App, TFile, Notice } from 'obsidian';
+import { App, TFile } from 'obsidian';
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/core/backend/ConflictManager.ts` at line 8: the import line includes the unused symbol `Notice`; remove it from the import statement (keeping `App` and `TFile`) to eliminate the unused import.

`backend/test/automated-test.ts` lines 227-233 (1)
⚠️ Potential issue | 🟡 Minor

Missing connection timeout — the test can hang indefinitely if the server is unreachable.

The initial WebSocket connection has no timeout. If the server isn't running, the test process will never exit (the `reject` handler only fires on explicit WebSocket errors, not on indefinite hangs in environments where the OS doesn't time out quickly).

Proposed fix

```diff
 await new Promise<void>((resolve, reject) => {
+  const connectTimeout = setTimeout(() => {
+    ws.close();
+    reject(new Error('Connection timed out'));
+  }, 10000);
   ws.on('open', () => {
+    clearTimeout(connectTimeout);
     log('Connected to server');
     resolve();
   });
-  ws.on('error', reject);
+  ws.on('error', (err) => {
+    clearTimeout(connectTimeout);
+    reject(err);
+  });
 });
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@backend/test/automated-test.ts` around lines 227-233: the connection Promise that waits for the ws `'open'` event has no timeout and can hang. Add a configurable timeout via `setTimeout` that rejects if neither `'open'` nor `'error'` fires, clear the timer in both handlers, and ensure the reject path cleans up listeners (and optionally closes `ws`) so the test fails fast instead of hanging.

`styles.css` lines 2088-2098 (1)
⚠️ Potential issue | 🟡 Minor

Deprecated `word-break: break-word` — flagged by Stylelint at lines 2095, 2241, 2253.

The `break-word` value for `word-break` is deprecated. The same behavior is achieved with `overflow-wrap: break-word`. This applies to three locations in the new CSS:

- Line 2095 (`.smtcmp-activity-item-raw pre`)
- Line 2241 (`.smtcmp-diff-addition pre`)
- Line 2253 (`.smtcmp-diff-deletion pre`)

Proposed fix (example for line 2095; apply similarly to lines 2241 and 2253)

```diff
 pre {
   margin: 0;
   white-space: pre-wrap;
-  word-break: break-word;
+  overflow-wrap: break-word;
   font-size: var(--font-smallest);
 }
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@styles.css` around lines 2088-2098: replace the deprecated `word-break: break-word` with `overflow-wrap: break-word` in the three selector blocks (`.smtcmp-activity-item-raw pre`, `.smtcmp-diff-addition pre`, and `.smtcmp-diff-deletion pre`) to restore the same behavior, removing the deprecated `word-break` declaration.

`backend/test/test-client.ts` lines 287-343 (1)
⚠️ Potential issue | 🟡 Minor

Listener leak: `rl.on('close', …)` is registered on every line input.

Each time `rl.on('line', …)` fires, a new `rl.on('close', …)` listener is appended (line 340). Over a session this accumulates unbounded listeners. Move it outside the `'line'` handler or register it once.

Proposed fix

```diff
 rl.on('line', (line) => {
   const input = line.trim();
   // ... prompt handling ...
   ws.send(JSON.stringify(message));
-
-  // Don't prompt until response is complete
-  rl.on('close', () => {
-    // Cleanup
-  });
 });
+
+rl.on('close', () => {
+  // Cleanup
+});
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@backend/test/test-client.ts` around lines 287-343: the `'close'` listener is being added inside the `rl.on('line', …)` handler, causing a listener leak. Move the `rl.on('close', …)` registration out of the line handler and register it once during setup (near where `rl.on('line', …)` is first attached) so it is not re-registered per input, ensuring any cleanup logic that references `currentRequestId` or `ws` is executed from that single close handler.

`backend/test/automated-test.ts` lines 149-151 (1)
⚠️ Potential issue | 🟡 Minor

Unguarded `JSON.parse` in the message handler can crash the test on malformed data.

If the server sends non-JSON data (e.g., a binary WebSocket frame or a malformed message), the test will throw an unhandled exception and exit without a clear diagnostic. Wrap the parse in try/catch and log the error.
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@backend/test/automated-test.ts` around lines 149-151: the message handler currently calls `JSON.parse` on `WebSocket.RawData` without protection. Wrap the `JSON.parse(data.toString())` call in a try/catch, log the parsing error (including the raw data or `error.message`) via the test logger, and return/ignore the message on parse failure so the test doesn't throw an unhandled exception.

`src/components/chat-view/AssistantToolMessageGroupItem.tsx` lines 128-160 (1)
⚠️ Potential issue | 🟡 Minor

Parsed activities use `Date.now()` for timestamps.

Activities synthesized from content parsing (lines 140-141) get `startTime: Date.now()` and `endTime: Date.now()`, which don't reflect actual execution time. This means the live timer in `ActivityItem` would show "0s" for these. Not a bug, since these are static/historical activities, but a `0` duration display could be confusing.

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/AssistantToolMessageGroupItem.tsx` around lines 128-160: the parsed `ActivityEvent` objects currently set `startTime` and `endTime` both to `Date.now()`, producing a zero duration. In the tool-parsing loop that constructs the activity with id `` `parsed-${type}-${match.index}` ``, produce a small non-zero duration (e.g., set `endTime = Date.now()` and `startTime = endTime` minus a short delta like 1000-5000 ms) so `ActivityItem`'s live timer shows a brief realistic duration for synthesized/historical activities.

`backend/src/agent.ts` lines 216-219 (1)
⚠️ Potential issue | 🟡 Minor

Abort signal may already be aborted when the listener is added.

If the external `signal` is already in the aborted state before `addEventListener` is called, the callback won't fire and `abortController` will never abort. Check `signal.aborted` up front.

Proposed fix

```diff
 if (signal) {
+  if (signal.aborted) {
+    abortController.abort();
+  }
   signal.addEventListener('abort', () => abortController.abort());
 }
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@backend/src/agent.ts` around lines 216-219: the abort handling may miss already-aborted external signals. After creating `abortController` (`const abortController = new AbortController()`), check whether the incoming `signal` is present and `signal.aborted` is true and, if so, call `abortController.abort()` immediately; then still register `signal.addEventListener('abort', () => abortController.abort())` for future aborts, so `abortController` always reflects the external signal's state.

`src/main.ts` lines 50-72 (1)
⚠️ Potential issue | 🟡 Minor

Potential null dereference of `vaultRpcHandler` after plugin unload.

If an RPC request arrives after `onunload()` runs (e.g., a message in-flight during shutdown), `this.vaultRpcHandler` will be `null` and the `!` assertion on line 57 will throw. Guard the access or remove the listener on unload.

Proposed fix

```diff
 webSocketClient.on('rpc_request', async (msg: unknown) => {
   const rpcMsg = msg as RpcRequestMessage
   try {
+    if (!this.vaultRpcHandler) {
+      console.warn('[Claudsidian] RPC request received but handler is null (plugin unloaded?)')
+      webSocketClient.sendRpcResponse(rpcMsg.id, undefined, {
+        code: 'RPC_ERROR',
+        message: 'Plugin is unloading',
+      })
+      return
+    }
+
     // Look up the activity ID tracked by BackendProvider's onToolStart.
     // This links the edit snapshot to the correct ActivityEvent for Undo.
     const activityId =
       webSocketClient.consumeActivityId(rpcMsg.method, rpcMsg.params) || rpcMsg.id
-    const result = await this.vaultRpcHandler!.handleRpc(
+    const result = await this.vaultRpcHandler.handleRpc(
       rpcMsg.method,
       rpcMsg.params,
       activityId,
     )
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/main.ts` around lines 50-72: the RPC handler can access `this.vaultRpcHandler` after the plugin has been unloaded, causing a null dereference. Update the `webSocketClient.on('rpc_request', …)` handler to first check that `this.vaultRpcHandler` is present (and return/send an RPC error if not) before calling `this.vaultRpcHandler.handleRpc` and using `consumeActivityId`/`sendRpcResponse`, and also remove the `'rpc_request'` listener in your `onunload` cleanup (via `webSocketClient.off`/unsubscribe) so no in-flight messages hit a null `vaultRpcHandler`.

`src/components/chat-view/ActivityItem.tsx` lines 40-65 (1)
⚠️ Potential issue | 🟡 Minor

`FileLink` should verify the file is a `TFile` before calling `openFile`.

`getAbstractFileByPath` can return a `TFolder`. Casting to `any` and calling `openFile` on a folder will fail silently or throw. Also, the `if (app)` guard is unnecessary since `useApp()` already throws if the context is missing.

Proposed fix

```diff
+import { TFile } from 'obsidian'
+
 function FileLink({ filePath, displayName }: { filePath: string; displayName?: string }) {
   const app = useApp()

   const handleClick = useCallback((e: React.MouseEvent) => {
     e.stopPropagation()
-    if (app) {
-      // Open the file in Obsidian
-      const file = app.vault.getAbstractFileByPath(filePath)
-      if (file) {
-        app.workspace.getLeaf().openFile(file as any)
-      }
+    const file = app.vault.getAbstractFileByPath(filePath)
+    if (file instanceof TFile) {
+      app.workspace.getLeaf(false).openFile(file)
     }
   }, [app, filePath])
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/ActivityItem.tsx` around lines 40-65: remove the unnecessary `app` null check (`useApp` throws if missing), and in `FileLink`'s `handleClick` verify that the value returned by `getAbstractFileByPath` is a `TFile` before calling `openFile`. Replace the blind cast (`file as any`) with a runtime type check (`instanceof TFile`) and only call `app.workspace.getLeaf().openFile(file)` when the value is a `TFile`; otherwise do nothing or log a warning.

`backend/src/server.ts` lines 177-197 (1)
⚠️ Potential issue | 🟡 Minor

Error handling sends both `error` and `complete` to the client.

When `handlePrompt` catches an error (line 177), it sends an `error` message. Then the `finally` block (line 189) sends a `complete` message if `completeSent` is false. The client receives both an error and a complete for the same request. Ensure the client handles this gracefully, or set `completeSent = true` after sending the error.

Proposed fix

```diff
 } catch (err) {
   logger.error(`Error processing prompt ${msg.id}:`, err);
   this.send({
     type: 'error',
     requestId: msg.id,
     code: 'AGENT_ERROR',
     message: err instanceof Error ? err.message : 'Unknown error',
   });
+  completeSent = true; // Don't send redundant complete after error
 } finally {
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@backend/src/server.ts` around lines 177-197: the catch block currently sends an `'error'` message but leaves `completeSent` false, so the `finally` block may also emit a `'complete'`. Set `completeSent = true` immediately after sending the error so the `finally` guard (which sends `'complete'` when `completeSent` is false) will not send a duplicate `'complete'` for the same `requestId`; keep the existing `this.activeRequests.delete(msg.id)` behavior in `finally` unchanged.

`src/components/chat-view/ActivityItem.tsx` lines 76-85 (1)
⚠️ Potential issue | 🟡 Minor

Timer shows 0s briefly for completed activities on mount.

`elapsed` is initialized to `0`, and the `else if` branch that computes the final elapsed only runs after the first render in the effect. For non-running activities with an `endTime`, consider initializing `elapsed` to `activity.endTime - activity.startTime` to avoid the flash.

Proposed fix

```diff
 const [isOpen, setIsOpen] = useState(false)
-const [elapsed, setElapsed] = useState(0)
+const [elapsed, setElapsed] = useState(() =>
+  activity.endTime
+    ? activity.endTime - activity.startTime
+    : activity.status === 'running'
+      ? Date.now() - activity.startTime
+      : 0,
+)
```

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/components/chat-view/ActivityItem.tsx` around lines 76-85: the timer flash happens because `elapsed` is initialized to `0`. Initialize it from the activity instead: when `activity.status === 'running'` use `Date.now() - activity.startTime`, otherwise if `activity.endTime` exists use `activity.endTime - activity.startTime`, else `0`. Use `useState`'s lazy initializer so the correct elapsed value is present on first render; keep the existing `useEffect` (which still updates running activities) unchanged.

`src/core/backend/BackendProvider.ts` lines 442-530 (1)
⚠️ Potential issue | 🟡 Minor

Error path yields an error chunk and then throws an exception.

When `onError` fires, it enqueues a chunk with `finish_reason: 'stop'` and sets `isComplete = true`. After the generator loop exits, lines 522-530 throw an `Error` or `LLMRateLimitExceededException`. The consumer thus receives the error both as a streamed chunk (with error info) and as a thrown exception. Depending on how callers handle the stream, this could result in double error handling or unexpected behavior.

Consider either removing the error chunk or not throwing after the loop — pick one signaling mechanism.
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@src/core/backend/BackendProvider.ts` around lines 442-530: the generator currently both enqueues an error chunk in `onError` (via `enqueueChunk`) and later throws based on `errorState.error`, causing duplicate error signaling. Introduce a boolean flag (e.g., `errorStreamed`) initialized to false, set it to true inside `onError` when `enqueueChunk` is called, and after the streaming loop only throw the `LLMRateLimitExceededException`/`Error` if `errorState.error` exists AND `errorStreamed` is false. This preserves the streamed-error contract when `onError` emitted a chunk and prevents a second thrown error. Symbols: `onError`, `enqueueChunk`, `errorState.error`, `LLMRateLimitExceededException`.

`src/core/backend/WebSocketClient.ts` lines 89-98 (1)
⚠️ Potential issue | 🟡 Minor

Auth token exposed in the WebSocket URL query string.

The token is sent as a URL query parameter (`?token=...`). In browser environments this is the standard workaround, since the WebSocket API doesn't support custom headers, but be aware that tokens in URLs may appear in server access logs, CDN/proxy logs, and potentially `Referer` headers on subsequent HTTP requests from the same page. Ensure server-side logging redacts the query string.

🤖 Prompt for AI Agents

> Verify each finding against the current code and only fix it if needed. In `@src/core/backend/WebSocketClient.ts` around lines 89-98: the code builds `wsUrl` with `?token=...`, exposing the auth token in the URL. Instead, change the connection flow (the constructor/connect method that creates `this.ws = new WebSocket(wsUrl)`) to open a bare WebSocket URL without the token and send the token as the first post-connect message (e.g., a JSON auth handshake), or use a secure subprotocol-based handshake the server supports. Update the server to accept and validate that initial auth message and reject connections that do not authenticate, and remove any usage of the token in query strings so logs/CDNs/`Referer` headers cannot capture it.

`src/core/backend/VaultRpcHandler.ts` lines 240-255 (1)
⚠️ Potential issue | 🟡 Minor

ReDoS risk: user-supplied regex is compiled and executed without safeguards.

The `pattern` parameter is compiled directly into a `RegExp` and then tested against every line of every matching file. A crafted pattern (e.g., `(a+)+b`) could cause catastrophic backtracking, freezing the Obsidian UI thread.

Consider adding a timeout or complexity limit. A simple mitigation is to wrap the per-line `regex.test()` in a timeout, or to limit pattern length.

🛡️ Minimal mitigation: limit pattern length

```diff
 // Compile regex
 let regex: RegExp;
 try {
+  if (pattern.length > 200) {
+    throw new Error('Regex pattern too long (max 200 characters)');
+  }
   regex = new RegExp(pattern, 'gi');
 } catch (e) {
   throw new Error(`Invalid regex pattern: ${pattern}`);
 }
```

🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@src/core/backend/VaultRpcHandler.ts` around lines 240-255: the `vaultGrep` method compiles a user-supplied `pattern` into a `RegExp` and runs it across every line (ReDoS risk). Mitigate by validating and rejecting dangerous or oversized patterns before compilation: check pattern length (e.g., a 200-character cap) and reject patterns containing nested catastrophic quantifier constructs (such as `(a+)+` or other repeated nested quantifiers), throwing a clear error. Keep the existing try/catch around `new RegExp(pattern, 'gi')` but run it only after the validation, and mention the validation in the error message so callers know to simplify the pattern. Symbols: `vaultGrep`, `pattern`, `regex`, `getMarkdownFiles`, `GrepResult`.
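As a concrete illustration of that validation, a pre-compilation guard might look like this; the 200-character cap and the nested-quantifier screen are assumptions for the sketch, not values from the actual `vaultGrep`:

```typescript
// Hypothetical pre-compilation guard for user-supplied grep patterns.
function compileSearchPattern(pattern: string): RegExp {
  if (pattern.length > 200) {
    throw new Error('Regex pattern too long (max 200 characters)')
  }
  // Crude screen for nested-quantifier shapes like (a+)+ that are prone
  // to catastrophic backtracking. This rejects some safe patterns too;
  // the trade-off favors keeping the UI thread responsive.
  if (/\([^)]*[+*][^)]*\)[+*{]/.test(pattern)) {
    throw new Error('Pattern rejected: nested quantifiers are not allowed')
  }
  try {
    return new RegExp(pattern, 'gi')
  } catch {
    throw new Error(`Invalid regex pattern: ${pattern}`)
  }
}
```

A length cap alone does not prevent backtracking blowups, which is why the sketch also screens for quantified groups that are themselves quantified.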
```ts
const BASE_SYSTEM_PROMPT = `You are an Obsidian note-editing assistant. You help users create, edit, search, and organize their notes in their Obsidian vault.

## Capabilities
- Read notes from the vault
- Write/create notes
- Edit notes with precise string replacement (vault_edit)
- Search across the vault (vault_search for text, vault_grep for regex patterns)
- Find files by pattern (vault_glob)
- List files and folders
- Rename/move notes (vault_rename)
- Delete notes (ask for confirmation first)
- Search the web for current information (WebSearch) - useful for looking up documentation, news, or any external information

## Guidelines
1. When editing existing notes, ALWAYS read them first to understand current content
2. Use vault_edit for small precise changes - it's more efficient than rewriting the whole file
3. Preserve existing formatting and structure unless asked to change it
4. Use proper Obsidian markdown:
   - [[wikilinks]] for internal links
   - #tags for categorization
   - YAML frontmatter for metadata
5. When creating new notes, suggest appropriate folder locations
6. For destructive operations (delete, overwrite), confirm with the user first
7. If a search returns no results, suggest alternative search terms or use vault_grep with regex

## Cookbook Research Tools
When the user asks about cooking techniques, recipes, ingredients, or food science:
- Use search_cookbooks to find information in their cookbook collection
- ALWAYS include exact citations from the results: source book name, page numbers, and section
- When the user asks about a specific book, use the \`sources\` parameter to filter: sources="The Professional Chef"
- For multiple specific sources: sources="ChefSteps, Modernist Cuisine"
- **CRITICAL: Citation format rules — copy these EXACTLY as they appear in tool results:**
  - PDF citations start with \`[[cookbooks/filename.pdf#page=N]]\` — this is an Obsidian wikilink that opens the PDF to the exact page. You MUST include this exact text in your response for every PDF citation. Example: \`[[cookbooks/CIA professional chef.pdf#page=304]] CIA professional chef, pp. 285-286\`
  - ChefSteps citations use markdown links like \`[ChefSteps: title](https://...)\` — preserve these as-is
  - Do NOT rewrite, summarize, or strip the \`[[...]]\` wikilinks — they are clickable deep links
- Include multiple sources when available for a comprehensive answer
- Quote key passages directly when they're particularly informative

## Response Style
- Be concise but helpful
- Explain what changes you're making
- When citing cookbook sources, always include the exact page numbers and preserve any links from the tool results
- If uncertain, ask for clarification
- When integrating research results, present the final synthesis — avoid restating the same finding in multiple formats within one response
- For complex multi-topic research, work in batches of 5-8 tool calls at a time rather than launching dozens in parallel. Complete one batch, synthesize results, then proceed to the next batch.

## Memory Management
You have a persistent memory file (.claude/memory.md) loaded into your context.
- After learning user preferences, project context, or important decisions, use vault_edit or vault_write to update .claude/memory.md
- Keep it concise (<500 words), organized with ## headings
- Sections: ## User Preferences, ## Projects, ## Key Decisions, ## Conventions
- Don't store conversation-specific details — only persistent knowledge`;
```
System prompt contains cookbook-specific instructions.
The `BASE_SYSTEM_PROMPT` includes a "Cookbook Research Tools" section (lines 42-53) with domain-specific instructions for a cooking-related vault. This is hardcoded into the agent and will appear for all users regardless of their vault content. Consider making this configurable or loading it from the vault (similar to how CLAUDE.md is used for vault-specific instructions).
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@backend/src/agent.ts` around lines 17-68: the `BASE_SYSTEM_PROMPT` constant embeds cookbook-specific guidance that will apply to every user. Extract that cookbook section out of `BASE_SYSTEM_PROMPT` and make the system prompt composable by adding a loader function (e.g., `buildSystemPrompt` or `getBaseSystemPrompt`) that reads optional vault-specific instructions (like CLAUDE.md does) or a dedicated cookbook file and appends them only when present; update usages that reference `BASE_SYSTEM_PROMPT` to call the new builder so cookbook rules are injected conditionally rather than hardcoded.
```ts
logger.info(`  PORT: ${process.env.PORT || 3001}`);
logger.info(`  AUTH_TOKEN: ${process.env.AUTH_TOKEN ? '***' : 'dev-token (default)'}`);
if (!MOCK_MODE) {
  logger.info(`  CLAUDE_MODEL: ${process.env.CLAUDE_MODEL || 'claude-opus-4-6'}`);
```
🧩 Analysis chain

🌐 Web query: Anthropic Claude model names 2025

💡 Result: Anthropic Claude model names that debuted in 2025 (with their Claude API model IDs):

- Claude Sonnet 3.7 (Feb 2025) — `claude-3-7-sonnet-20250219` (alias: `claude-3-7-sonnet-latest`) [1][2]
- Claude Sonnet 4 (May 2025) — `claude-sonnet-4-20250514` (also `claude-sonnet-4-0`) [1][2]
- Claude Opus 4 (May 2025) — `claude-opus-4-20250514` (also `claude-opus-4-0`) [1][2]
- Claude Opus 4.1 (Aug 2025) — `claude-opus-4-1-20250805` [1][2]
- Claude Sonnet 4.5 (Sep 2025) — `claude-sonnet-4-5-20250929` [1][2]
- Claude Haiku 4.5 (Oct 2025) — `claude-haiku-4-5-20251001` [1][2]
- Claude Opus 4.5 (Nov 2025) — `claude-opus-4-5-20251101` [1][3]

Sources: [1] [2] [3]

🏁 Script executed:

```shell
cat -n backend/src/index.ts | head -50
```

Repository: glowingjade/obsidian-smart-composer. Length of output: 2033
Replace the invalid default model name `claude-opus-4-6` with a valid Anthropic Claude model identifier.

The default model string `claude-opus-4-6` does not match any known Anthropic Claude model. Valid 2025 models include `claude-opus-4-20250514`, `claude-opus-4-1-20250805`, and `claude-opus-4-5-20251101`. Using an invalid model name will cause API failures when the `CLAUDE_MODEL` environment variable is not set.
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@backend/src/index.ts` at line 33: the logged default for `CLAUDE_MODEL` uses an invalid model name. Update the default in the `logger.info` expression (`process.env.CLAUDE_MODEL || 'claude-opus-4-6'`) to a valid Anthropic Claude identifier such as `'claude-opus-4-20250514'` (or another valid 2025 model like `'claude-opus-4-1-20250805'`/`'claude-opus-4-5-20251101'`) so the app won't attempt to call the API with a nonexistent model when `process.env.CLAUDE_MODEL` is unset.
```ts
  GrepResult,
} from './protocol.js';

const AUTH_TOKEN = process.env.AUTH_TOKEN || 'dev-token';
```
Hardcoded fallback `'dev-token'` for `AUTH_TOKEN`.

If `AUTH_TOKEN` is not set in production, any client can connect with `'dev-token'`. Consider logging a warning at startup when the env var is unset, or refusing to start without it in non-development environments.
Proposed safeguard

```diff
-const AUTH_TOKEN = process.env.AUTH_TOKEN || 'dev-token';
+const AUTH_TOKEN = process.env.AUTH_TOKEN;
+if (!AUTH_TOKEN) {
+  if (process.env.NODE_ENV === 'production') {
+    throw new Error('AUTH_TOKEN environment variable is required in production');
+  }
+  logger.warn('AUTH_TOKEN not set — using insecure default "dev-token"');
+}
+const effectiveAuthToken = AUTH_TOKEN || 'dev-token';
```

Then use `effectiveAuthToken` on line 362.
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@backend/src/server.ts` at line 29: replace the hardcoded fallback by deriving an `effectiveAuthToken`. Read `process.env.AUTH_TOKEN` into `AUTH_TOKEN`, compute `effectiveAuthToken = AUTH_TOKEN || (process.env.NODE_ENV === 'development' ? 'dev-token' : undefined)`, and if `effectiveAuthToken` is undefined and `NODE_ENV !== 'development'`, log an error and exit (or throw) at startup; if `AUTH_TOKEN` was unset but `NODE_ENV === 'development'`, log a clear startup warning. Then update the places that use `AUTH_TOKEN` (e.g., the later usage on line 362) to use the new `effectiveAuthToken` variable so production never accepts the default dev token.
```json
  "id": "claudsidian",
  "name": "Claudsidian",
```
Changing the plugin `id` is a breaking change for existing users.

Obsidian uses the manifest `id` to locate the plugin's data folder (`<vault>/.obsidian/plugins/<id>/`), settings, and for community plugin registry lookups. Renaming from `"smart-composer"` to `"claudsidian"` means:

- Existing installations won't receive this as an update — Obsidian will treat it as an entirely different plugin.
- User settings, data, and configuration stored under the old `smart-composer` folder will be orphaned.
- If this is a fork intended for separate distribution, this is fine — but it should not be merged into the upstream `glowingjade/obsidian-smart-composer` repo without a migration strategy.

Please confirm whether this is intentional (standalone fork) or if it should retain the original id for backward compatibility.
🤖 Prompt for AI Agents
> Verify each finding against the current code and only fix it if needed. In `@manifest.json` around lines 2-3: the manifest `"id"` was changed from `"smart-composer"` to `"claudsidian"`, which breaks upgrades and orphans settings. Either revert the manifest `"id"` back to `"smart-composer"` to preserve existing users' plugin folder and settings, or, if this is an intentional standalone fork, keep `"claudsidian"` but add a clear migration path: implement a one-time migration routine that looks for settings/data under the old `smart-composer` folder and moves/merges them into the new `claudsidian` location (or document the fork decision in the README and release notes), updating the manifest `"id"` key and adding the migration code or documentation accordingly.
```tsx
const { explorationActivities, editActivities } = useMemo(() => {
  const exploration: ActivityEvent[] = []
  const edits: ActivityEvent[] = []

  for (const activity of activities) {
    if (
      activity.type === 'vault_write' ||
      activity.type === 'vault_edit' ||
      activity.type === 'vault_rename' ||
      activity.type === 'vault_delete'
    ) {
      edits.push(activity)
    } else {
      exploration.push(activity)
    }
  }

  return { explorationActivities: exploration, editActivities: edits }
}, [activities])

if (activities.length === 0) {
  return null
}

return (
  <div className="smtcmp-activity-accordion">
    {/* Exploration section */}
    {explorationActivities.length > 0 && (
      <div className="smtcmp-activity-section">
        <div
          className="smtcmp-activity-accordion-header"
          onClick={() => setIsOpen(!isOpen)}
        >
          <span className="smtcmp-activity-accordion-icon">
            {isOpen ? <ChevronDown size={14} /> : <ChevronRight size={14} />}
          </span>
          <span className="smtcmp-activity-accordion-title">
            {isStreaming ? 'Exploring...' : `Explored ${summary}`}
          </span>
        </div>

        {isOpen && (
          <div className="smtcmp-activity-accordion-content">
            {explorationActivities.map((activity) => (
              <ActivityItem
                key={activity.id}
                activity={activity}
                label={getActivityLabel(activity)}
                icon={getActivityIcon(activity.type)}
              />
            ))}
          </div>
        )}
      </div>
    )}
  </div>
)
```
editActivities is computed but never rendered.
The useMemo at line 185 splits activities into explorationActivities and editActivities, but only the exploration section is rendered (lines 212–239). Edit activities (vault_write, vault_edit, vault_rename, vault_delete) will silently disappear from the accordion. This is either a bug or an incomplete feature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/chat-view/ActivityAccordion.tsx` around lines 185 - 241, The
code splits activities into explorationActivities and editActivities in the
useMemo but never renders editActivities; update the component to render a
separate "Edits" section (similar to the exploration section) that maps
editActivities to ActivityItem using the same props (key={activity.id},
activity, label={getActivityLabel(activity)},
icon={getActivityIcon(activity.type)}), include a header and toggle (you can
reuse isOpen/setIsOpen or introduce a new state like isEditsOpen for independent
collapse), and show a suitable title (e.g., "Edits" or count/summary) while
preserving existing behavior for isStreaming and summary for the exploration
section.
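The classification rule that both sections would share can be extracted into a standalone helper. This is an illustrative sketch with simplified types (the real `ActivityEvent` type in the component has more fields), not the component's actual code:

```typescript
// Simplified activity model for illustration; the real type carries more data.
type ActivityType =
  | 'vault_read'
  | 'vault_search'
  | 'vault_list'
  | 'vault_write'
  | 'vault_edit'
  | 'vault_rename'
  | 'vault_delete'

interface ActivityEvent {
  id: string
  type: ActivityType
}

// Mutating operations belong in the "Edits" section; everything else is exploration.
const EDIT_TYPES = new Set<ActivityType>([
  'vault_write',
  'vault_edit',
  'vault_rename',
  'vault_delete',
])

export function partitionActivities(activities: ActivityEvent[]): {
  exploration: ActivityEvent[]
  edits: ActivityEvent[]
} {
  const exploration: ActivityEvent[] = []
  const edits: ActivityEvent[] = []
  for (const a of activities) {
    ;(EDIT_TYPES.has(a.type) ? edits : exploration).push(a)
  }
  return { exploration, edits }
}
```

An "Edits" section in the accordion would then map `edits` to `ActivityItem` exactly as the exploration section does, with its own open/closed state.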
| name="Support Claudsidian" | ||
| desc="If you find Claudsidian valuable, consider supporting its development!" | ||
| heading | ||
| className="smtcmp-settings-support-smart-composer" | ||
| > | ||
| <ObsidianButton | ||
| text="Buy Me a Coffee" | ||
| text="View on GitHub" | ||
| onClick={() => | ||
| window.open('https://www.buymeacoffee.com/kevin.on', '_blank') | ||
| window.open('https://github.com/ki-cooley/claudsidian', '_blank') | ||
| } | ||
| cta | ||
| /> |
Replacing the original author's funding link with a fork's GitHub URL.
This change removes the original author's (Heesu Suh / glowingjade) "Buy Me a Coffee" donation link and replaces it with a GitHub link to ki-cooley/claudsidian. If this PR is merged into the upstream repository, it would remove the original maintainer's funding mechanism.
If this is a standalone fork, this is fine. If targeting upstream, the original funding link should be preserved or the change discussed with the original author.
Additionally, the URL has the same fork-vs-upstream concern flagged in ErrorModal.tsx.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/settings/SettingsTabRoot.tsx` around lines 24 - 35, This
change replaces the original maintainer's donation link with a fork's GitHub URL
in the SettingsTabRoot component (see the ObsidianButton rendering), which would
remove the upstream author's funding link if merged; either restore the original
"Buy Me a Coffee" donation link (crediting Heesu Suh / glowingjade) in
SettingsTabRoot's ObsidianButton, or make the funding link
configurable/conditional for forks (e.g., via a build-time or runtime config)
and apply the same fork-vs-upstream treatment to ErrorModal.tsx so both
locations behave consistently; if targeting upstream, discuss and get consent
from the original author before replacing their funding link.
```ts
export const DEFAULT_CHAT_MODEL_ID = 'backend-agent'
export const DEFAULT_APPLY_MODEL_ID = 'backend-agent'
```
Changing defaults to backend-agent will break the out-of-box experience for users without the backend.
Both DEFAULT_CHAT_MODEL_ID and DEFAULT_APPLY_MODEL_ID now point to backend-agent, which requires a running backend server and configured authToken/backendUrl. New installs and users who haven't set up the backend will get immediate errors when trying to chat or apply.
If this is intentional for the Claudsidian fork, consider at minimum adding a clear onboarding prompt or fallback when the backend is unreachable. If this PR targets the upstream Smart Composer repo, these defaults should remain as the original API-based models.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/constants.ts` around lines 11 - 12, DEFAULT_CHAT_MODEL_ID and
DEFAULT_APPLY_MODEL_ID were changed to 'backend-agent', which breaks new
installs without a configured backend; either revert these constants to the
upstream API-based default model IDs (restore the original API model strings for
DEFAULT_CHAT_MODEL_ID and DEFAULT_APPLY_MODEL_ID) or add a runtime fallback:
keep the constants as-is only when a valid backend is configured (check
authToken and backendUrl) and otherwise select the upstream API model IDs, plus
surface a clear onboarding message when backend is unreachable so users know to
configure authToken/backendUrl before using the backend-agent.
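A sketch of the runtime-fallback option described above, under assumed names (`resolveDefaultChatModelId` is hypothetical, and the upstream fallback id is a placeholder, not the real upstream model string):

```typescript
// Hypothetical helper: only default to 'backend-agent' when the backend
// provider is actually configured; otherwise fall back to an API-based model.
const BACKEND_MODEL_ID = 'backend-agent'
const FALLBACK_MODEL_ID = 'upstream-default-model' // placeholder, not the real upstream id

interface BackendProviderSettings {
  backendUrl?: string
  authToken?: string
}

export function resolveDefaultChatModelId(
  settings: BackendProviderSettings,
): string {
  // Both fields must be non-empty for the backend to be usable.
  const backendConfigured = Boolean(settings.backendUrl && settings.authToken)
  return backendConfigured ? BACKEND_MODEL_ID : FALLBACK_MODEL_ID
}
```

The same check could gate an onboarding notice: if the user selects the backend model while `backendConfigured` is false, prompt them to set `backendUrl`/`authToken` first.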
```ts
{
  type: 'backend',
  id: PROVIDER_TYPES_INFO.backend.defaultProviderId,
  backendUrl: '',
  authToken: '',
},
```
🧩 Analysis chain
🏁 Scripts executed against the glowingjade/obsidian-smart-composer repository:

```shell
# First, let's understand the file structure and find migration-related files
git ls-files | grep -E '(migration|schema|setting)' | head -20

# Look at the relevant lines in src/constants.ts
cat -n src/constants.ts | sed -n '170,250p'

# Search for migration files with more specific patterns
fd -t f -e ts | xargs grep -l "schema.*version\|migration" | head -15

# Find all migration files
find src/settings/schema/migrations -type f -name "*.ts" | sort

# Check schema version definition
rg -n "version|schema" src/settings/schema -t ts -A 2 | head -50

# Look for backend references in migration files
rg -l "backend" src/settings/schema/migrations

# Check the latest migration file content
cat src/settings/schema/migrations/11_to_12.ts

# Check the 12_to_13.ts migration file
cat src/settings/schema/migrations/12_to_13.ts

# Find the current SETTINGS_SCHEMA_VERSION
rg -n "SETTINGS_SCHEMA_VERSION" src/settings/schema --type ts -B 2 -A 2

# Check if there are any migrations beyond 12_to_13
find src/settings/schema/migrations -type f -name "*_to_*.ts" | sort

# Search for backend references in all migration files
rg -n "backend" src/settings/schema/migrations --type ts

# Check the migrations index to see which migrations are exported/applied
cat -n src/settings/schema/migrations/index.ts
```
Settings migration required for new backend provider and model.
Per the comments on lines 177–179 and 237–238, adding new default providers/models requires a settings migration. The new backend provider (lines 227–232) and backend-agent model (lines 241–246) have been added to constants.ts but no migration exists to include them. The current schema version is 13, and the 12_to_13 migration only adds externalResourceDir. Create a new migration (13_to_14 or equivalent) to add these entries, otherwise existing users won't receive them on upgrade.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/constants.ts` around lines 227 - 232, Existing users need a settings
migration to add the new default "backend" provider and "backend-agent" model;
create a new migration function (bump schema from 13 to 14) that updates stored
settings to include the provider entry matching the constants.ts block (type
'backend', id = PROVIDER_TYPES_INFO.backend.defaultProviderId, backendUrl '',
authToken '') and the new model entry for 'backend-agent' so existing installs
receive them. Locate the app's migration registry (where 12_to_13 is
implemented) and add a 13_to_14 migration that safely merges these defaults into
providers/models arrays only if they are missing, and increment the schema
version to 14.
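A sketch of what such a 13→14 migration could look like, assuming a simplified settings shape (the real `SmartComposerSettings` schema, provider id constant, and migration function signature may differ):

```typescript
// Simplified stand-ins for the real settings schema (assumptions).
interface ProviderEntry {
  type: string
  id: string
  [key: string]: unknown
}

interface ModelEntry {
  providerType: string
  id: string
  [key: string]: unknown
}

interface Settings {
  version: number
  providers: ProviderEntry[]
  chatModels: ModelEntry[]
}

// Merge the new backend defaults only if they are missing, then bump the version.
export function migrateFrom13To14(prev: Settings): Settings {
  const providers = [...prev.providers]
  if (!providers.some((p) => p.type === 'backend')) {
    providers.push({
      type: 'backend',
      id: 'backend', // stand-in for PROVIDER_TYPES_INFO.backend.defaultProviderId
      backendUrl: '',
      authToken: '',
    })
  }

  const chatModels = [...prev.chatModels]
  if (!chatModels.some((m) => m.id === 'backend-agent')) {
    chatModels.push({ providerType: 'backend', id: 'backend-agent' })
  }

  return { ...prev, version: 14, providers, chatModels }
}
```

Guarding with `some(...)` makes the migration idempotent, so re-running it on already-migrated settings changes nothing.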
```ts
export class ConflictManager {
  private versions = new Map<string, FileVersion>();

  constructor(private app: App) {
    // Watch for external file modifications
    this.app.vault.on('modify', (file) => {
      if (file instanceof TFile && this.versions.has(file.path)) {
        this.handleExternalModify(file);
      }
    });
  }
```
Missing cleanup for vault event listener — resource leak on plugin reload.
The vault.on('modify', …) listener registered in the constructor is never unregistered. If the ConflictManager is recreated (e.g., plugin disable/enable cycle), stale listeners accumulate. Add a destroy() or cleanup() method and call it from the plugin's onunload.
Proposed fix

```diff
 export class ConflictManager {
   private versions = new Map<string, FileVersion>();
+  private modifyHandler: (file: any) => void;

   constructor(private app: App) {
-    this.app.vault.on('modify', (file) => {
+    this.modifyHandler = (file) => {
       if (file instanceof TFile && this.versions.has(file.path)) {
         this.handleExternalModify(file);
       }
-    });
+    };
+    this.app.vault.on('modify', this.modifyHandler);
+  }
+
+  destroy(): void {
+    this.app.vault.off('modify', this.modifyHandler);
+    this.versions.clear();
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export class ConflictManager {
  private versions = new Map<string, FileVersion>();
  private modifyHandler: (file: any) => void;

  constructor(private app: App) {
    // Watch for external file modifications
    this.modifyHandler = (file) => {
      if (file instanceof TFile && this.versions.has(file.path)) {
        this.handleExternalModify(file);
      }
    };
    this.app.vault.on('modify', this.modifyHandler);
  }

  destroy(): void {
    this.app.vault.off('modify', this.modifyHandler);
    this.versions.clear();
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/backend/ConflictManager.ts` around lines 25 - 35, The constructor of
ConflictManager registers a vault 'modify' listener but never removes it,
causing leaked listeners on plugin reloads; add a public cleanup/destroy method
on ConflictManager that removes the listener registered in the constructor (undo
the this.app.vault.on('modify', ...) with the corresponding
this.app.vault.off('modify', ...) using the same callback reference) and ensure
the plugin calls that cleanup from its onunload to prevent stale listeners;
reference the ConflictManager class, its constructor, the handleExternalModify
callback, and the new destroy/cleanup method when implementing.
```ts
private hashContent(content: string): string {
  let hash = 0;
  for (let i = 0; i < content.length; i++) {
    const char = content.charCodeAt(i);
    hash = (hash << 5) - hash + char;
    hash = hash & hash; // Convert to 32-bit integer
  }
  return hash.toString(36);
}
```
Weak hash function risks silent data loss through false-negative conflict detection.
This 32-bit multiplicative hash (`h * 31 + c`, the Java-style string hash) has a meaningful collision probability, especially as content size grows. A hash collision causes `checkConflict` to report "no conflict" when the file has actually changed, leading to a silent overwrite. For a safety-critical conflict detector, consider using a cryptographic hash (SHA-256) or at minimum a longer non-cryptographic hash.
Also, `hash & hash` on line 118 is a no-op — the intent is likely `hash |= 0`, or simply relying on the bitwise shift already truncating to 32 bits.
Proposed fix (synchronous two-hash variant; for stronger guarantees, SHA-256 via Web Crypto is available in Obsidian/Electron)
```diff
-private hashContent(content: string): string {
-  let hash = 0;
-  for (let i = 0; i < content.length; i++) {
-    const char = content.charCodeAt(i);
-    hash = (hash << 5) - hash + char;
-    hash = hash & hash;
-  }
-  return hash.toString(36);
-}
+private hashContent(content: string): string {
+  // Simple but longer hash to reduce collision risk
+  // For a synchronous approach, use two independent 32-bit hashes
+  let h1 = 0x811c9dc5; // FNV offset basis
+  let h2 = 0;
+  for (let i = 0; i < content.length; i++) {
+    const char = content.charCodeAt(i);
+    h1 = Math.imul(h1 ^ char, 0x01000193); // FNV-1a
+    h2 = (h2 << 5) - h2 + char;
+    h2 = h2 | 0;
+  }
+  return (h1 >>> 0).toString(36) + '-' + (h2 >>> 0).toString(36);
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
private hashContent(content: string): string {
  // Simple but longer hash to reduce collision risk
  // For a synchronous approach, use two independent 32-bit hashes
  let h1 = 0x811c9dc5; // FNV offset basis
  let h2 = 0;
  for (let i = 0; i < content.length; i++) {
    const char = content.charCodeAt(i);
    h1 = Math.imul(h1 ^ char, 0x01000193); // FNV-1a
    h2 = (h2 << 5) - h2 + char;
    h2 = h2 | 0;
  }
  return (h1 >>> 0).toString(36) + '-' + (h2 >>> 0).toString(36);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/backend/ConflictManager.ts` around lines 113 - 121, The current
hashContent function uses a weak 32-bit DJB2-like hash and contains a no-op mask
(hash & hash); replace it with a robust SHA-256 implementation and update
callers accordingly (e.g., checkConflict) so collisions are negligible:
implement hashContent to compute a SHA-256 digest (using Web Crypto's
crypto.subtle.digest or Node's crypto.createHash('sha256')), encode the result
as hex/base64 string, remove the erroneous "hash & hash" logic, and if using
crypto.subtle make hashContent async and propagate async/await to any callers
such as checkConflict to await the new hash result. Ensure all places that
compare hashContent outputs are updated to handle the new string format and
async behavior.
Summary
- `BackendProvider`: extract images, send them separately over WebSocket, and construct proper Anthropic multimodal content blocks in `agent.ts` so Claude receives actual images instead of raw JSON with base64 strings
- Skip RAG in `PromptGenerator` when using the backend provider — pass file paths instead so the agent reads them via `vault_read`, avoiding crashes from an unconfigured embedding system
- Add `[ImageFlow]` `console.log` instrumentation at key points to diagnose image persistence issues

Changes
- `src/core/backend/protocol.ts`: `images` field added to `PromptMessage`
- `backend/src/protocol.ts`: matching `images` field
- `src/utils/chat/promptGenerator.ts`: `isBackendProvider()`; skip RAG for backend, pass file paths
- `src/core/backend/BackendProvider.ts`: `extractPromptAndImages()` replaces old prompt conversion
- `src/core/backend/WebSocketClient.ts`: `images` param in `sendPrompt()`
- `backend/src/server.ts`: passes `msg.images` to `runAgent()`
- `backend/src/agent.ts`
- `src/components/chat-view/Chat.tsx`
- `src/components/chat-view/chat-input/ChatUserInput.tsx`

Test plan
- Build passes (`npm run build`)
- Agent reads mentioned files via `vault_read`
- Image flow verified via `[ImageFlow]` logs

🤖 Generated with Claude Code
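The Anthropic multimodal content blocks described in the summary can be sketched as follows. The helper name and input shape are assumptions, but the block layout (`type: 'image'` with a base64 `source`, followed by a `type: 'text'` block) follows the Anthropic Messages API:

```typescript
// Shape of an image as it might arrive over the WebSocket (assumption).
interface ImageInput {
  mediaType: string // e.g. 'image/png'
  base64Data: string
}

// Content-block union per the Anthropic Messages API.
type ContentBlock =
  | { type: 'text'; text: string }
  | {
      type: 'image'
      source: { type: 'base64'; media_type: string; data: string }
    }

// Build a user-message content array: image blocks first, then the text prompt,
// so Claude receives real images instead of base64 strings embedded in JSON.
export function buildUserContent(
  prompt: string,
  images: ImageInput[],
): ContentBlock[] {
  const blocks: ContentBlock[] = images.map((img) => ({
    type: 'image',
    source: {
      type: 'base64',
      media_type: img.mediaType,
      data: img.base64Data,
    },
  }))
  blocks.push({ type: 'text', text: prompt })
  return blocks
}
```

The resulting array would be passed as the `content` of a `role: 'user'` message in the Anthropic SDK call inside the agent loop.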
Summary by CodeRabbit
Release Notes
New Features
UI/UX Improvements
Branding