Add granular connector caching #1552

Commits (titles truncated in source):
- …r names and improve operation name management
- …n creation for projects, and enhance AIChatFooter tooltip alignment
- …cessing logic for connector summaries and details
📝 Walkthrough

Adds a runtime-versioned connector catalog and multi-source caching/resolution for MI agent mode, refactors connector tooling to single-name lookups with richer metadata, updates inbound connector context, injects connector guidance into prompts, adds a sync script, and surfaces store status/warnings across startup/session flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as AI Agent
    participant Tool as Connector Tool
    participant Cache as Cache Layer
    participant Store as Remote Store
    participant LocalDB as Local DB
    Agent->>Tool: Request connector definition (name)
    Tool->>Cache: Check runtime-versioned fresh-cache
    alt Fresh cache hit
        Cache-->>Tool: Return definition (source: fresh-cache)
    else Miss
        Tool->>Store: Fetch catalog/details with timeout
        alt Store success
            Store-->>Tool: Return definition (source: store)
            Tool->>Cache: Write fresh-cache entries
        else Store failure/timeout
            Tool->>Cache: Check stale-cache
            alt Stale-cache hit
                Cache-->>Tool: Return stale definition (source: stale-cache)
                Tool->>Tool: Mark storeStatus = degraded, add warning
            else Stale miss
                Tool->>LocalDB: Read local fallback definitions
                LocalDB-->>Tool: Return fallback definition (source: local-db)
                Tool->>Tool: Mark storeStatus = degraded + fallbackUsed
            end
        end
    end
    Tool->>Agent: Return ConnectorDefinitionLookupResult (definitions, missingNames, fallbackUsedNames, storeFailureNames, warnings, runtimeVersionUsed)
```

```mermaid
sequenceDiagram
    participant Init as Project Init
    participant RPC as RPC Manager
    participant Session as Startup Session Tracker
    participant ChatMgr as Chat History Manager
    participant Tool as Connector Tool
    Init->>RPC: loadChatHistory(projectUri)
    alt First load for project
        RPC->>Session: create startup session marker
        Session-->>RPC: marker set
        RPC->>ChatMgr: initialize history with startup session
        ChatMgr->>Tool: getAvailableConnectorCatalog(projectPath)
        Tool-->>ChatMgr: catalog (storeStatus, warnings, runtimeVersion)
        ChatMgr-->>RPC: history loaded with connector context
    else Subsequent loads
        RPC->>ChatMgr: load history (skip startup init)
        ChatMgr-->>RPC: history returned
    end
```
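The resolution order in the first diagram can be sketched as a small TypeScript function. This is a simplified, hypothetical model only — the real implementation in connector_store_cache.ts is asynchronous, keyed by runtime version, and uses timeouts; the maps and the `fetchFromStore` callback below are stand-ins.

```typescript
// Hypothetical, simplified model of the multi-source lookup shown above.
// Source labels mirror the diagram: fresh-cache -> store -> stale-cache -> local-db.
type Source = "fresh-cache" | "store" | "stale-cache" | "local-db";

interface LookupResult {
    definition?: string; // stand-in for the real definition object
    source?: Source;
    storeStatus: "ok" | "degraded";
    fallbackUsed: boolean;
    warnings: string[];
}

function resolveDefinition(
    name: string,
    freshCache: Map<string, string>,
    fetchFromStore: (name: string) => string | undefined, // undefined = store failure/timeout
    staleCache: Map<string, string>,
    localDb: Map<string, string>
): LookupResult {
    const fresh = freshCache.get(name);
    if (fresh) {
        return { definition: fresh, source: "fresh-cache", storeStatus: "ok", fallbackUsed: false, warnings: [] };
    }

    const fromStore = fetchFromStore(name);
    if (fromStore) {
        freshCache.set(name, fromStore); // write-through to the fresh cache on success
        return { definition: fromStore, source: "store", storeStatus: "ok", fallbackUsed: false, warnings: [] };
    }

    const stale = staleCache.get(name);
    if (stale) {
        return {
            definition: stale, source: "stale-cache", storeStatus: "degraded", fallbackUsed: false,
            warnings: [`Served stale definition for '${name}' because the store was unreachable.`]
        };
    }

    const fallback = localDb.get(name);
    if (fallback) {
        return {
            definition: fallback, source: "local-db", storeStatus: "degraded", fallbackUsed: true,
            warnings: [`Used bundled fallback definition for '${name}'.`]
        };
    }

    return { storeStatus: "degraded", fallbackUsed: false, warnings: [`No definition found for '${name}'.`] };
}
```

The key property is that a store outage degrades the result (stale or bundled data plus a warning) instead of failing the lookup outright.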
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 3 failed (2 warnings, 1 inconclusive)
Pull request overview
This pull request implements granular connector caching to improve the reliability and performance of connector store operations. The changes introduce per-connector/inbound definition caching instead of bulk catalog caching, enabling better handling of connector store outages and more granular cache invalidation.
Changes:
- Refactored connector store caching architecture to use granular per-definition caching with separate catalog and definition files organized by runtime version in ~/.wso2-mi/copilot/cache
- Updated connector tools API to fetch one connector/inbound per call with optional targeted parameter details, improving efficiency and reducing unnecessary data transfer
- Enhanced error handling and fallback strategies for connector store unavailability, including stale cache usage and local fallback definitions
- Added connector documentation to system prompt and updated guidance on connector initialization patterns
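The per-runtime-version cache layout described above can be illustrated with small path helpers. Only the cache root and the catalog.json file name are quoted later in this review; the per-version/per-type directory grouping and the definition file-name normalization below are assumptions for illustration.

```typescript
import * as os from "os";
import * as path from "path";

// Cache root and catalog file name as quoted in the review below.
const CACHE_ROOT_DIR = path.join(os.homedir(), ".wso2-mi", "copilot", "cache");
const CATALOG_FILE_NAME = "catalog.json";

// Hypothetical helpers: entries grouped by runtime version, with one catalog
// file plus one file per connector/inbound definition.
function catalogPath(runtimeVersion: string, itemType: "connector" | "inbound"): string {
    return path.join(CACHE_ROOT_DIR, runtimeVersion, itemType, CATALOG_FILE_NAME);
}

function definitionPath(runtimeVersion: string, itemType: "connector" | "inbound", name: string): string {
    // Normalizing the display name into a safe file name is an assumption.
    const fileName = `${name.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-")}.json`;
    return path.join(CACHE_ROOT_DIR, runtimeVersion, itemType, fileName);
}
```

Keying everything under the runtime version is what lets a version bump invalidate cached definitions without touching other versions' entries.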
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 6 comments.
Summary per file:
| File | Description |
|---|---|
| connector_store_cache.ts | Complete rewrite implementing granular caching with catalog summaries and per-definition cache files, including sophisticated fallback strategies |
| connector_tools.ts | Refactored from multi-connector batch fetching to single-item API with targeted parameter retrieval and parameter availability checking |
| project_tools.ts | Updated to use new granular lookup API with enhanced error messages for store failures and fallback usage tracking |
| inbound_db.ts | Updated inbound endpoint definitions with new versions, added File Event Listener and Google PubSub entries, updated RabbitMQ entry (incomplete) |
| connectors_guide.ts | Added comprehensive connector initialization guidance organized by pattern with clearer examples and references to tool names |
| rpc-manager.ts | Added startup session initialization logic to create fresh session on first panel open per project per extension-host lifetime |
| chat-history-manager.ts | Updated SESSION_STORAGE_VERSION from integer 1 to float 1.0 |
| system.ts | Added CONNECTOR_DOCUMENTATION to system prompt and updated connector tool usage guidance |
| prompt.ts | Added connector store status warnings to user prompt reminders |
| agent.ts | Updated connector tool display input to reflect new single-item API parameters |
| tool-action-mapper.ts | Enhanced connector tool action mapper to use specific connector name in loading/completion messages |
| update-connector-context-db.js | New Node.js script to fetch and update connector/inbound definitions from connector store API |
| .env.example | Added new environment variables for connector store summary and details filter endpoints |
| AIChatFooter.tsx | Added align="start" prop to web access tooltip for improved UI alignment |
Comments suppressed due to low confidence (1)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts:92
- The cache uses a shared global cache directory (~/.wso2-mi/copilot/cache) that is not project-specific, while per-project storage is used elsewhere in the codebase (see getCopilotProjectStorageDir in storage-paths.ts). This creates an inconsistency: if two projects use different runtime versions or if cache needs to be cleared for one project, the shared cache affects all projects. Consider whether connector store cache should also be project-specific to maintain consistency with other project storage patterns and to allow better isolation between projects.
```ts
const CACHE_ROOT_DIR = path.join(os.homedir(), '.wso2-mi', 'copilot', 'cache');
const CATALOG_FILE_NAME = 'catalog.json';
```
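A project-scoped variant, as the comment suggests, might look like the sketch below. `getCopilotProjectStorageDir` is only referenced in the review (from storage-paths.ts); its signature and behavior here are stubbed assumptions, not the real implementation.

```typescript
import * as path from "path";

// Stub standing in for getCopilotProjectStorageDir from storage-paths.ts;
// the real function's signature and layout are assumptions here.
function getCopilotProjectStorageDir(projectPath: string): string {
    return path.join(projectPath, ".wso2-mi", "copilot");
}

// Project-scoped cache root: clearing or re-versioning one project's cache
// no longer affects other projects sharing the machine.
function getProjectCacheRoot(projectPath: string, runtimeVersion: string): string {
    return path.join(getCopilotProjectStorageDir(projectPath), "cache", runtimeVersion);
}
```

The trade-off versus the shared `CACHE_ROOT_DIR` is isolation (per-project invalidation) at the cost of duplicating cached definitions across projects on the same runtime version.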
Resolved review threads (content collapsed):
- workspaces/mi/mi-extension/scripts/update-connector-context-db.js (outdated)
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts (outdated)
- workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts (outdated)
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (outdated)
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/inbound_db.ts
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts (1)
312-318: ⚠️ Potential issue | 🟡 Minor

Validate that `version.tagName` is a string before constructing `DependencyDetails`.

The `DependencyDetails` interface requires `version: string`, but the code assigns `item.version?.tagName` without validation. If the resolved definition lacks a version or tagName, an undefined value will be passed to a required string field, violating TypeScript's strict type checking. This could cause runtime issues when `updateAiDependencies` processes the dependency.

Add validation for `version.tagName` alongside the existing Maven coordinate check at line 283:

🛡️ Proposed fix

```diff
- if (typeof item?.mavenGroupId !== 'string' || typeof item?.mavenArtifactId !== 'string') {
+ if (typeof item?.mavenGroupId !== 'string' || typeof item?.mavenArtifactId !== 'string' || typeof item?.version?.tagName !== 'string') {
      return {
          name: itemName,
          type: itemType,
          success: false,
-         error: `${itemType === 'connector' ? 'Connector' : 'Inbound endpoint'} definition is missing Maven coordinates`
+         error: `${itemType === 'connector' ? 'Connector' : 'Inbound endpoint'} definition is missing Maven coordinates or version`
      };
  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts` around lines 312-318, the DependencyDetails construction uses item.version?.tagName without ensuring it's a string; validate that item.version and item.version.tagName are present and a non-empty string before creating the DependencyDetails (in the same validation block that checks item.mavenGroupId/item.mavenArtifactId where updateAiDependencies is prepared). If the version/tagName is missing or not a string, either skip this dependency or throw a clear error so updateAiDependencies always receives a valid string for DependencyDetails.version; update the logic that builds the dependencies array to enforce this check.
🧹 Nitpick comments (5)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tool-action-mapper.ts (1)

71-81: Trim connector name before building UI messages.

Using the trimmed name avoids odd spacing if the tool input includes leading/trailing whitespace.

♻️ Suggested tweak

```diff
- const targetName = toolInput?.name;
- if (typeof targetName === 'string' && targetName.trim().length > 0) {
+ const rawName = toolInput?.name;
+ const targetName = typeof rawName === 'string' ? rawName.trim() : '';
+ if (targetName.length > 0) {
      return {
          loading: `fetching ${targetName}`,
          completed: `fetched ${targetName}`,
          failed: `failed to fetch ${targetName}`
      };
  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tool-action-mapper.ts` around lines 71-81, in the CONNECTOR_TOOL_NAME case, trim toolInput?.name before checking length and interpolating it into UI messages: replace the current targetName usage with a trimmedName (e.g., const trimmedName = (toolInput?.name ?? '').trim()), use trimmedName for the length check (trimmedName.length > 0) and interpolate trimmedName into the loading/completed/failed strings, falling back to the existing "connector details" messages when trimmedName is empty.

workspaces/mi/mi-extension/.env.example (1)

14-15: Consider aligning key order to satisfy dotenv-linter warnings.

dotenv-linter flags both new keys as out-of-order. If ordering is enforced in CI, consider relocating these keys (or running the linter's auto-fix) to keep warnings clean.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/.env.example` around lines 14-15, dotenv-linter reports these two entries are out-of-order; fix by reordering the keys in .env.example so they follow the file's alphabetical/key ordering (place MI_CONNECTOR_STORE_BACKEND_DETAILS_FILTER before or after MI_CONNECTOR_STORE_BACKEND_SUMMARIES to match the rest of the file), or run dotenv-linter --fix to auto-correct ordering, updating MI_CONNECTOR_STORE_BACKEND_SUMMARIES and MI_CONNECTOR_STORE_BACKEND_DETAILS_FILTER accordingly to eliminate the linter warnings.

workspaces/mi/mi-extension/src/ai-features/agent-mode/agents/main/prompt.ts (1)

247-264: Defensive default for connector warnings.

If `warnings` can ever be undefined, the `.length` access will throw. A small default keeps prompt generation resilient.

♻️ Suggested hardening

```diff
- const connectorStoreReminder = connectorCatalog.warnings.length > 0
-     ? `Connector store status: ${connectorCatalog.storeStatus}. ${connectorCatalog.warnings.join(' ')}`
+ const connectorWarnings = connectorCatalog.warnings ?? [];
+ const connectorStoreReminder = connectorWarnings.length > 0
+     ? `Connector store status: ${connectorCatalog.storeStatus}. ${connectorWarnings.join(' ')}`
      : '';
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/agents/main/prompt.ts` around lines 247-264, the code assumes connectorCatalog.warnings exists when computing connectorStoreReminder; harden it by treating warnings as optional: use a safe default (e.g., an empty array) for both the length check and the join, i.e. connectorCatalog.warnings || [] (or the optional chaining equivalent), so accessing .length or .join won't throw in the block around getAvailableConnectorCatalog and modeReminder.

workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (1)

490-499: Redundant pom.xml parsing — `getRuntimeVersionFromPom` is called multiple times per request.

`resolveDefinitions` calls `getRuntimeVersionFromPom` internally (line 498). When `project_tools.ts` also calls `getRuntimeVersionFromPom` at its line 88, followed by `getConnectorDefinitions` and `getInboundDefinitions` (which both call `resolveDefinitions`), the pom.xml is parsed up to 3 times per operation. Consider accepting the runtime version as a parameter to `resolveDefinitions` (and the public `getConnectorDefinitions`/`getInboundDefinitions` wrappers) to avoid redundant I/O and XML parsing.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 490-499, change resolveDefinitions to accept a runtimeVersion (or detectedRuntimeVersion) parameter instead of calling getRuntimeVersionFromPom internally, update getConnectorDefinitions and getInboundDefinitions to pass the pre-fetched runtime version obtained by getRuntimeVersionFromPom (at project_tools.ts) into resolveDefinitions, and remove the internal getRuntimeVersionFromPom call so pom.xml is parsed only once per operation.

workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts (1)

183-183: Fragile string matching to detect store failure errors.

`r.error?.includes('connector store is unavailable')` at line 183 relies on the exact wording of the error message from line 276. If the message is ever reworded, this silently breaks. Consider adding a `storeFailure: boolean` field to `ProcessItemResult` instead.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts` at line 183, the filter uses fragile string matching on r.error ("const storeFailedUnavailable = results.filter(...)") which will break if the error message text changes; add a typed boolean flag (e.g. storeFailure: boolean) to the ProcessItemResult interface/type, set that flag at the site that constructs the result when the connector store-unavailable condition is detected (the same place that currently produces the error text), and update the filter to use r.storeFailure instead of checking r.error.includes(...); update all places that construct ProcessItemResult to populate the new field.
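The `storeFailure` flag suggested in the last nitpick could look like the sketch below. The `ProcessItemResult` shape here is hypothetical — a minimal stand-in for whatever the real interface in project_tools.ts declares.

```typescript
// Hypothetical shape of ProcessItemResult with the suggested typed flag,
// replacing the fragile substring match on the error message.
interface ProcessItemResult {
    name: string;
    type: "connector" | "inbound";
    success: boolean;
    error?: string;
    storeFailure?: boolean; // set where the store-unavailable error is constructed
}

// Collect names that failed because the store was unavailable.
// No string matching: the flag survives any rewording of the error text.
function collectStoreFailures(results: ProcessItemResult[]): string[] {
    return results.filter((r) => r.storeFailure === true).map((r) => r.name);
}
```

The flag is populated once, at the single site that builds the store-unavailable error, so the detection logic and the user-facing message can evolve independently.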
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js`:
- Around line 71-110: The project uses global fetch and AbortController (see
functions fetchWithTimeout, parseResponse, requestJson) which require Node 18+,
so update the mi-extension package.json to add an engines field entry "node":
">=18.0.0" (or merge into existing engines object) to enforce the Node.js 18+
constraint for the workspaces/mi/mi-extension package; ensure package.json
remains valid JSON and commit the change.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/chat-history-manager.ts`:
- Line 81: SESSION_STORAGE_VERSION was set to 1.0 which is identical to 1 in JS
and is a no-op; decide whether you intended a real incompatible bump or a
cosmetic change and fix accordingly: if you intended to invalidate existing
stored sessions, update SESSION_STORAGE_VERSION to a new integer (e.g. 2) and
ensure any migration logic/readers that use isCompatibleSessionVersion handle
the new value; if the change was cosmetic, revert SESSION_STORAGE_VERSION to 1
and add a brief comment next to the constant clarifying that versions are
integers to avoid future confusion.
In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/context/inbound_db.ts`:
- Around line 2091-2110: The RabbitMQ placeholder connector object with
"connectorName": "RabbitMQ (Inbound)" contains empty repoName and Maven
coordinates which will break add-dependency flows; either remove this object
from the inbound connectors list or populate its metadata (repoName,
mavenGroupId, mavenArtifactId, id and valid version entries) and/or add a flag
like "isPlaceholder": true so the UI/catalog can skip it; locate the JSON entry
for "RabbitMQ (Inbound)" in inbound_db.ts and apply one of these fixes to
prevent the incomplete entry from surfacing.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Around line 414-422: The current try block lets writeCatalogCache errors
discard fetched data; wrap the writeCatalogCache call in its own try/catch so
failures are best-effort: after calling fetchCatalogFromStore(itemType) store
fetchedItems in a variable, then attempt writeCatalogCache(cachePath, itemType,
runtimeVersion, fetchedItems) inside a try block and on failure catch and log a
warning (including the error and relevant context like
cachePath/itemType/runtimeVersion) without rethrowing, then continue to return {
items: fetchedItems, source: 'store', warnings } so successful fetches are not
lost; update code references to writeCatalogCache, fetchCatalogFromStore,
fetchedItems, cachePath, itemType, runtimeVersion, and warnings accordingly.
In `@workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts`:
- Around line 856-866: In loadChatHistory, avoid unconditionally calling
createNewSession for startup sessions; first check whether an active session
already exists for this.projectUri (e.g., inspect this.sessionId /
this.activeSession / any session state that sendAgentMessage would set) and only
run the startup block (startupSessionInitializedProjects.has check +
createNewSession call) when no session is initialized; update the condition
around createNewSession in loadChatHistory to guard against an existing session
so concurrent RPCs like sendAgentMessage don’t cause the startup call to close
or replace an already-active session.
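The SESSION_STORAGE_VERSION point above is easy to verify: JavaScript has a single IEEE-754 number type, so the literal `1.0` is indistinguishable from `1` at runtime and in serialized JSON, meaning the change cannot invalidate any stored session.

```typescript
// JavaScript/TypeScript numbers are IEEE-754 doubles: 1.0 and 1 are the same
// value, so bumping SESSION_STORAGE_VERSION from 1 to 1.0 is a no-op.
const SESSION_STORAGE_VERSION = 1.0;

// Strict equality holds, and JSON serialization drops the ".0" entirely.
const sameValue: boolean = SESSION_STORAGE_VERSION === 1;
const serialized: string = JSON.stringify({ v: SESSION_STORAGE_VERSION });
```

An actual invalidation bump therefore needs a new integer (e.g. `2`), as the prompt above suggests.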
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- workspaces/mi/mi-extension/.env.example
- workspaces/mi/mi-extension/scripts/update-connector-context-db.js
- workspaces/mi/mi-extension/src/ai-features/agent-mode/agents/main/agent.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/agents/main/prompt.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/agents/main/system.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/chat-history-manager.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/connector_db.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/connectors_guide.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/inbound_db.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tool-action-mapper.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_tools.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts
- workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts
- workspaces/mi/mi-visualizer/src/views/AIPanel/component/AIChatFooter.tsx
Resolved review threads (content collapsed):
- workspaces/mi/mi-extension/src/ai-features/agent-mode/chat-history-manager.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/inbound_db.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (outdated)
Referenced code (connector_store_cache.ts):

```ts
const aliases = [...toRequestAliases(name), ...getDefinitionAliases(fallbackDefinition)];
await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
continue;
```
Cache write failures in resolveDefinitions will abort the entire lookup.
writeDefinitionCaches calls at lines 563, 576, and 586 are not wrapped in try-catch. If a file-system write fails (disk full, permissions, etc.), the error propagates out of resolveDefinitions, discarding any definitions already successfully resolved. Cache writes should be best-effort — wrap each in a try-catch so that a transient I/O failure doesn't break the primary lookup flow.
🛡️ Proposed fix

```diff
      definitionsByName[name] = fallbackDefinition;
      fallbackUsedNames.push(name);
      storeFailureNames.push(name);
      const aliases = [...toRequestAliases(name), ...getDefinitionAliases(fallbackDefinition)];
-     await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+     try {
+         await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+     } catch (cacheError) {
+         logWarn(`[ConnectorStoreCache] Failed to write definition cache for '${name}': ${cacheError}`);
+     }
      continue;
```

Apply the same pattern to the other two writeDefinitionCaches call sites (lines 576 and 586).
Also applies to: 574-577, 584-587
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`
around lines 562 - 564, In resolveDefinitions, the writeDefinitionCaches calls
(the ones that pass itemType, runtimeVersionUsed, fallbackDefinition and
aliases) must be made best-effort so a filesystem error doesn't abort the entire
lookup; wrap each writeDefinitionCaches(...) invocation in its own try-catch and
log or ignore the error inside the catch so resolveDefinitions can continue
returning already-resolved definitions (apply the same try-catch pattern to the
three call sites where writeDefinitionCaches is invoked with aliases derived
from toRequestAliases/getDefinitionAliases and the fallbackDefinition).
Actionable comments posted: 4
♻️ Duplicate comments (1)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (1)
597-604: ⚠️ Potential issue | 🟠 Major

`writeDefinitionCaches` failures in `resolveDefinitions` will abort the entire lookup.

The three `writeDefinitionCaches` calls at lines 603, 616, and 626 are not wrapped in try-catch. If any file-system write fails (disk full, permissions, etc.), the error propagates out of `resolveDefinitions`, discarding all definitions resolved so far. Cache writes should be best-effort, consistent with the pattern already applied for `writeCatalogCache` at lines 476–485.

🛡️ Proposed fix — wrap each call in try-catch

```diff
      const aliases = [...toRequestAliases(name), ...getDefinitionAliases(fallbackDefinition)];
-     await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+     try {
+         await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+     } catch (cacheError) {
+         logWarn(`[ConnectorStoreCache] Failed to write definition cache for '${name}': ${cacheError}`);
+     }
      continue;
```

Apply the same pattern to the other two `writeDefinitionCaches` call sites (lines 616 and 626).

Also applies to: 614-617, 624-627
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 597 - 604, In resolveDefinitions the three writeDefinitionCaches calls (the one using fallbackDefinition and the other two around lines 616 and 626) are unguarded and can abort the whole lookup on I/O errors; wrap each writeDefinitionCaches(itemType, runtimeVersionUsed, …) invocation in a try-catch that mirrors the writeCatalogCache pattern: catch errors, log them (including the error and context such as name / aliases / runtimeVersionUsed), and continue without throwing so resolved definitions (definitionsByName, fallbackUsedNames, storeFailureNames) are preserved as best-effort cache writes.
🧹 Nitpick comments (7)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (4)
530-561: Definition cache reads are sequential — could be parallelized.

The loop at lines 549–561 reads definition caches one at a time for each requested name. For large batches, this could be slow due to sequential I/O. Consider using `Promise.all` (or `Promise.allSettled`) to parallelize the reads.

♻️ Parallelize cache reads

```diff
- for (const name of requestedNames) {
-     const cached = await readDefinitionCacheForName(itemType, runtimeVersionUsed, name);
-     if (cached.fresh) {
-         definitionsByName[name] = cached.fresh;
-         continue;
-     }
-
-     if (cached.stale) {
-         staleByName[name] = cached.stale;
-     }
-
-     namesToFetch.push(name);
- }
+ const cacheResults = await Promise.all(
+     requestedNames.map(async (name) => ({
+         name,
+         cached: await readDefinitionCacheForName(itemType, runtimeVersionUsed, name),
+     }))
+ );
+
+ for (const { name, cached } of cacheResults) {
+     if (cached.fresh) {
+         definitionsByName[name] = cached.fresh;
+         continue;
+     }
+
+     if (cached.stale) {
+         staleByName[name] = cached.stale;
+     }
+
+     namesToFetch.push(name);
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 530 - 561, In resolveDefinitions, the loop that calls readDefinitionCacheForName for each name is sequential and should be parallelized: map requestedNames to promises of readDefinitionCacheForName(itemType, runtimeVersionUsed, name), await Promise.all (or Promise.allSettled) on that array, then iterate the results alongside requestedNames to populate definitionsByName, staleByName, namesToFetch, etc.; ensure you preserve the existing logic for cached.fresh and cached.stale and collect failed reads appropriately (use allSettled if you need to handle individual errors without aborting all requests).
650-669: `getRuntimeVersionFromPom` silently returns `null` on all errors.

The catch-all at line 666 swallows all exceptions (missing file, XML parse errors, permission errors) and returns `null`. This is acceptable since the caller falls back to `DEFAULT_RUNTIME_VERSION`, but a debug log would help diagnose issues when the pom.xml exists but can't be parsed.

📝 Add debug logging on parse failure

```diff
- } catch {
+ } catch (error) {
+     logDebug(`[ConnectorStoreCache] Failed to read runtime version from pom.xml: ${error}`);
      return null;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 650 - 669, The function getRuntimeVersionFromPom currently swallows all errors and returns null; update its catch block to log a debug message including pomPath and the caught error before returning null so failures to read/parse pom.xml are visible; specifically modify getRuntimeVersionFromPom's try/catch to catch (err) and call the project's logger (or console.debug if no logger available) with a clear message referencing pomPath and err, then return null as before.
610-638: `fetchedDefinitions.find()` is O(n×m) — acceptable for expected sizes but worth noting.

For each name in `namesToFetch`, `fetchedDefinitions.find(...)` scans the entire fetched array (line 612). With typical connector counts (tens to low hundreds), this is fine. If the set grows significantly, consider building a lookup map from `fetchedDefinitions` first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 610 - 638, The loop over namesToFetch calls fetchedDefinitions.find(...) (via matchesDefinition) for each name causing O(n×m) behavior; to fix, build a lookup map keyed by a stable identifier (e.g., normalized name or the same key used by matchesDefinition) from fetchedDefinitions once before the loop, then replace calls to fetchedDefinitions.find(...) with direct map lookups; update references inside the loop that use fromStore to use the map result and keep the existing behavior around definitionsByName, writeDefinitionCaches, fallback handling, staleByName and missingNames unchanged.
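The lookup map suggested above can be sketched as follows. This is a minimal illustration only: the `ConnectorDefinition` shape, the `aliases` field, and `normalizeKey` are assumptions for this sketch, not the real types from `connector_store_cache.ts`, and the keying should mirror whatever `matchesDefinition` compares.

```typescript
// Hypothetical shape for illustration; the real definition type lives in
// connector_store_cache.ts and may differ.
interface ConnectorDefinition {
    name: string;
    aliases?: string[];
}

const normalizeKey = (value: string): string => value.trim().toLowerCase();

// Build the index once (O(m)); each subsequent name lookup is O(1) instead
// of an O(m) scan via fetchedDefinitions.find(...).
function buildDefinitionIndex(
    definitions: ConnectorDefinition[]
): Map<string, ConnectorDefinition> {
    const index = new Map<string, ConnectorDefinition>();
    for (const definition of definitions) {
        index.set(normalizeKey(definition.name), definition);
        for (const alias of definition.aliases ?? []) {
            index.set(normalizeKey(alias), definition);
        }
    }
    return index;
}

const index = buildDefinitionIndex([
    { name: 'http', aliases: ['HTTP Connector'] },
    { name: 'salesforce' },
]);
const fromStore = index.get(normalizeKey('HTTP Connector'));
```

Inside the loop, `fetchedDefinitions.find(...)` then becomes a direct `index.get(...)` with no behavior change elsewhere.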
91-91: Cache root directory uses a fixed path under the user's home directory.

`CACHE_ROOT_DIR = path.join(os.homedir(), '.wso2-mi', 'copilot', 'cache')` — this is a reasonable convention for VS Code extensions and aligns with similar tools. Just noting that this creates a persistent directory outside of the VS Code extension storage path (`context.globalStoragePath`). If the extension is uninstalled, these cache files will be orphaned. This is likely acceptable for a cache, but worth considering if cleanup on uninstall is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` at line 91, The hard-coded CACHE_ROOT_DIR uses the user's homedir and can leave orphaned files on uninstall; change CACHE_ROOT_DIR to use the VS Code extension storage path (context.globalStoragePath) when available by accepting/propagating the extension context into the module or a constructor that initializes the cache directory (reference: CACHE_ROOT_DIR in connector_store_cache.ts), fall back to path.join(os.homedir(), '.wso2-mi', 'copilot', 'cache') only if context.globalStoragePath is missing, and ensure the initialization routine (the function that creates the cache dir) creates the directory with fs.mkdirSync(..., { recursive: true }) so the cache lives under the extension-managed storage and is cleaned on uninstall.

workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts (1)
126-137: Definition lookup uses exact name as key — may miss if `resolveDefinitions` stores under a different casing.

At line 129, `connectorLookup.definitionsByName[connectorName]` uses the raw connector name from the input. Inside `resolveDefinitions` (cache file, line 552), `definitionsByName[name]` uses the trimmed (but not lowercased) name from `requestedNames`. So the keys should match as long as the caller's `connectorName` matches exactly (after trim) with what was passed to `resolveDefinitions`.

However, `fallbackUsedNames.includes(connectorName)` (line 130) and `storeFailureNames.includes(connectorName)` (line 131) use `Array.includes`, which is case-sensitive. If there's any case mismatch between the caller's name and the internal name, these checks will silently fail, understating fallback/failure reporting. This is low risk since names flow through the same path, but worth noting.

Also applies to: 144-158
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts` around lines 126 - 137, The lookup and includes checks use the raw connectorName which is case-sensitive and may miss matches; normalize names consistently when reading from connectorLookup and when checking fallbackUsedNames/storeFailureNames: compute a normalizedKey (e.g., trimmed and lowercased) for connectorName and use it to access connectorLookup.definitionsByName (or maintain a connectorLookup.normalizedDefinitions map) and to test fallbackUsedNames.includes(normalizedKey) and storeFailureNames.includes(normalizedKey) before calling processItem so comparisons align with resolveDefinitions' stored keys.

workspaces/mi/mi-extension/scripts/update-connector-context-db.js (2)
489-496: `writeTsArrayFile` reads the file a second time — potential TOCTOU if run concurrently.

Both `readExistingRecordsByName` (line 470) and `writeTsArrayFile` (line 490) read the file independently. If two `updateTarget` calls run concurrently for the same file, a race condition could corrupt the output. Since the script processes targets sequentially (`for...of` at line 545), this isn't currently exploitable, but it's worth a defensive comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js` around lines 489 - 496, The writeTsArrayFile function currently re-reads the file causing a TOCTOU risk when paired with readExistingRecordsByName; change the flow so the existing file content (string) is read once by readExistingRecordsByName and passed into writeTsArrayFile (update its signature to accept an existingContent parameter and use that instead of fs.readFile), update call sites (including updateTarget) accordingly, and add a short defensive comment near the for...of processing loop explaining why we avoid a second read to mitigate concurrent-update races.
25-27: Template literal in `SUMMARY_URL_TEMPLATE` is eagerly evaluated.

Line 27 uses a backtick template literal, so an unescaped `${type}` would be interpreted as a JS variable reference (`undefined` at this point). The `\$` escape prevents this: `\${type}` in a template literal produces the literal string `${type}` (the backslash escapes the `$`), so the `.replace('${type}', ...)` on line 318 does match. The code is correct but fragile and confusing — a regular string would be clearer.

♻️ Use a plain string to avoid confusion

```diff
-const SUMMARY_URL_TEMPLATE = `${API_BASE}/connectors/summaries?type=\${type}&limit=100&offset=0&product=MI`;
+const SUMMARY_URL_TEMPLATE = API_BASE + '/connectors/summaries?type=${type}&limit=100&offset=0&product=MI';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js` around lines 25 - 27, The SUMMARY_URL_TEMPLATE is written as a backtick template which is confusing and fragile; replace it with a plain string that contains the literal ${type} (e.g. build it via concatenation: API_BASE + '/connectors/summaries?type=${type}&limit=100&offset=0&product=MI') so subsequent .replace('${type}', ...) calls on SUMMARY_URL_TEMPLATE will match unambiguously; update the constant SUMMARY_URL_TEMPLATE accordingly (no runtime interpolation) and keep API_BASE as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js`:
- Around line 469-487: The function readExistingRecordsByName currently calls
JSON.parse on a slice of TypeScript source (JSON.parse(existing.slice(...)))
which will fail for trailing commas or TS-specific syntax; update this function
to either (a) robustly parse the array by using a TypeScript/JS parser (e.g.,
use `@babel/parser` or TypeScript compiler API) to extract the array literal and
convert it to JSON, or (b) at minimum wrap the JSON.parse call in a try/catch
and throw a much more descriptive error that includes filePath and exportName
and suggests possible causes (trailing commas/single quotes/TS syntax) and the
recommended remediation; also add a short comment above
readExistingRecordsByName documenting the assumption that the file must contain
a pure JSON array written by writeTsArrayFile and that hand-edits may break
parsing so a TS parser should be used if hand-edits are expected.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Line 27: STORE_FETCH_TIMEOUT_MS (used by fetchWithTimeout) is too short for
large connector definition POSTs and causes intermittent failures; update the
code so definition-detail fetches use a longer timeout or accept a per-request
timeout option instead of the single 5000ms constant. Locate usages of
STORE_FETCH_TIMEOUT_MS and fetchWithTimeout in connector_store_cache.ts (the
catalog summary vs. definition detail fetch paths) and either introduce a new
constant like STORE_DEFINITION_FETCH_TIMEOUT_MS with a larger value and use it
for POST/detail requests, or change fetchWithTimeout calls to pass a
configurable timeout parameter for definition fetches; ensure existing summary
fetches still use the original 5000ms where appropriate.
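The configurable-timeout option suggested above can be sketched as a small wrapper. This is illustrative only: the real `fetchWithTimeout` signature in `connector_store_cache.ts` is not shown in this review, `STORE_DEFINITION_FETCH_TIMEOUT_MS` and its value are hypothetical, and the sketch assumes Node 18+ `AbortController`/`fetch`.

```typescript
const STORE_FETCH_TIMEOUT_MS = 5000;             // existing summary-fetch budget
const STORE_DEFINITION_FETCH_TIMEOUT_MS = 20000; // hypothetical larger budget for detail POSTs

// Generic timeout wrapper: callers receive an AbortSignal to pass to fetch();
// aborting rejects the in-flight request.
function withTimeout<T>(
    work: (signal: AbortSignal) => Promise<T>,
    timeoutMs: number = STORE_FETCH_TIMEOUT_MS
): Promise<T> {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    return work(controller.signal).finally(() => clearTimeout(timer));
}

// Summary fetches keep the default budget; definition-detail fetches opt in:
// withTimeout((signal) => fetch(url, { method: 'POST', signal }),
//             STORE_DEFINITION_FETCH_TIMEOUT_MS)
```

This keeps the original 5000ms for summary fetches while letting the detail path pass a larger per-request budget.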
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts`:
- Around line 100-107: emptyLookup currently sets runtimeVersionUsed to
runtimeVersion || 'unknown', which can differ from what getRuntimeVersionUsed()
(which falls back to DEFAULT_RUNTIME_VERSION) reports; update emptyLookup to use
the same fallback so values are consistent by either calling
getRuntimeVersionUsed() or importing DEFAULT_RUNTIME_VERSION from
connector_store_cache.ts and using that instead of the literal 'unknown' for
runtimeVersionUsed in the emptyLookup object (refer to the emptyLookup symbol,
runtimeVersion variable, and getRuntimeVersionUsed/DEFAULT_RUNTIME_VERSION).
In `@workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts`:
- Around line 858-877: The startup flow has a race because hasInitializedSession
only observes settled state; add a per-project in-flight lock (e.g., a new Set
like startupSessionInFlightProjects) and use it along with
startupSessionInitializedProjects and this.projectUri to guard createNewSession:
before starting creation, check and set the in-flight lock for this.projectUri,
re-check hasInitializedSession/startupSessionInitializedProjects to avoid
double-creation, then call createNewSession, and finally clear the in-flight
lock and mark startupSessionInitializedProjects.add(this.projectUri) on success
(or clear on failure). Apply the same locking pattern to the similar block
around createNewSession at lines 896-900 so concurrent RPCs cannot both start
startup sessions for the same project.
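The in-flight lock described above can be sketched with a promise-keyed map so concurrent RPCs share one creation attempt. The field names follow the review's suggestion (`startupSessionInitializedProjects`, an in-flight structure), and `createNewSession` stands in for the real rpc-manager method; the actual class members may differ.

```typescript
const startupSessionInitializedProjects = new Set<string>();
const startupSessionInFlightProjects = new Map<string, Promise<void>>();

async function ensureStartupSession(
    projectUri: string,
    createNewSession: () => Promise<void>
): Promise<void> {
    if (startupSessionInitializedProjects.has(projectUri)) {
        return; // already created for this project
    }
    const inFlight = startupSessionInFlightProjects.get(projectUri);
    if (inFlight) {
        return inFlight; // another RPC already started creation; await it
    }
    const creation = createNewSession()
        .then(() => {
            startupSessionInitializedProjects.add(projectUri); // mark only on success
        })
        .finally(() => {
            startupSessionInFlightProjects.delete(projectUri); // clear lock; failures can retry
        });
    startupSessionInFlightProjects.set(projectUri, creation);
    return creation;
}
```

Storing the promise (rather than a boolean) in the in-flight map means later callers await the same creation instead of racing or busy-waiting.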
---
Duplicate comments:
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Around line 597-604: In resolveDefinitions the three writeDefinitionCaches
calls (the one using fallbackDefinition and the other two around lines 616 and
626) are unguarded and can abort the whole lookup on I/O errors; wrap each
writeDefinitionCaches(itemType, runtimeVersionUsed, …) invocation in a try-catch
that mirrors the writeCatalogCache pattern: catch errors, log them (including
the error and context such as name / aliases / runtimeVersionUsed), and continue
without throwing so resolved definitions (definitionsByName, fallbackUsedNames,
storeFailureNames) are preserved as best-effort cache writes.
---
Nitpick comments:
In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js`:
- Around line 489-496: The writeTsArrayFile function currently re-reads the file
causing a TOCTOU risk when paired with readExistingRecordsByName; change the
flow so the existing file content (string) is read once by
readExistingRecordsByName and passed into writeTsArrayFile (update its signature
to accept an existingContent parameter and use that instead of fs.readFile),
update call sites (including updateTarget) accordingly, and add a short
defensive comment near the for...of processing loop explaining why we avoid a
second read to mitigate concurrent-update races.
- Around line 25-27: The SUMMARY_URL_TEMPLATE is written as a backtick template
which is confusing and fragile; replace it with a plain string that contains the
literal ${type} (e.g. build it via concatenation: API_BASE +
'/connectors/summaries?type=${type}&limit=100&offset=0&product=MI') so
subsequent .replace('${type}', ...) calls on SUMMARY_URL_TEMPLATE will match
unambiguously; update the constant SUMMARY_URL_TEMPLATE accordingly (no runtime
interpolation) and keep API_BASE as-is.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Around line 530-561: In resolveDefinitions, the loop that calls
readDefinitionCacheForName for each name is sequential and should be
parallelized: map requestedNames to promises of
readDefinitionCacheForName(itemType, runtimeVersionUsed, name), await
Promise.all (or Promise.allSettled) on that array, then iterate the results
alongside requestedNames to populate definitionsByName, staleByName,
namesToFetch, etc.; ensure you preserve the existing logic for cached.fresh and
cached.stale and collect failed reads appropriately (use allSettled if you need
to handle individual errors without aborting all requests).
- Around line 650-669: The function getRuntimeVersionFromPom currently swallows
all errors and returns null; update its catch block to log a debug message
including pomPath and the caught error before returning null so failures to
read/parse pom.xml are visible; specifically modify getRuntimeVersionFromPom's
try/catch to catch (err) and call the project's logger (or console.debug if no
logger available) with a clear message referencing pomPath and err, then return
null as before.
- Around line 610-638: The loop over namesToFetch calls
fetchedDefinitions.find(...) (via matchesDefinition) for each name causing
O(n×m) behavior; to fix, build a lookup map keyed by a stable identifier (e.g.,
normalized name or the same key used by matchesDefinition) from
fetchedDefinitions once before the loop, then replace calls to
fetchedDefinitions.find(...) with direct map lookups; update references inside
the loop that use fromStore to use the map result and keep the existing behavior
around definitionsByName, writeDefinitionCaches, fallback handling, staleByName
and missingNames unchanged.
- Line 91: The hard-coded CACHE_ROOT_DIR uses the user's homedir and can leave
orphaned files on uninstall; change CACHE_ROOT_DIR to use the VS Code extension
storage path (context.globalStoragePath) when available by accepting/propagating
the extension context into the module or a constructor that initializes the
cache directory (reference: CACHE_ROOT_DIR in connector_store_cache.ts), fall
back to path.join(os.homedir(), '.wso2-mi', 'copilot', 'cache') only if
context.globalStoragePath is missing, and ensure the initialization routine (the
function that creates the cache dir) creates the directory with
fs.mkdirSync(..., { recursive: true }) so the cache lives under the
extension-managed storage and is cleaned on uninstall.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts`:
- Around line 126-137: The lookup and includes checks use the raw connectorName
which is case-sensitive and may miss matches; normalize names consistently when
reading from connectorLookup and when checking
fallbackUsedNames/storeFailureNames: compute a normalizedKey (e.g., trimmed and
lowercased) for connectorName and use it to access
connectorLookup.definitionsByName (or maintain a
connectorLookup.normalizedDefinitions map) and to test
fallbackUsedNames.includes(normalizedKey) and
storeFailureNames.includes(normalizedKey) before calling processItem so
comparisons align with resolveDefinitions' stored keys.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- workspaces/mi/mi-extension/package.json
- workspaces/mi/mi-extension/scripts/update-connector-context-db.js
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/inbound_db.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts
- workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts
✅ Files skipped from review due to trivial changes (1)
- workspaces/mi/mi-extension/package.json
9791b46 to 0b0b0cb
Actionable comments posted: 1
♻️ Duplicate comments (1)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (1)
605-633: ⚠️ Potential issue | 🟠 Major

`writeDefinitionCaches` failures will abort `resolveDefinitions`, discarding already-resolved definitions.

The three `writeDefinitionCaches` calls (lines 608, 621, 631) are not wrapped in try-catch. If a file-system write fails, the error propagates out of the loop, discarding all definitions resolved so far. Cache writes should be best-effort, consistent with how `writeCatalogCache` is handled in `loadCatalog` (lines 481-490).

🛡️ Proposed fix — wrap each call in try-catch

```diff
     const aliases = [...toRequestAliases(name), ...getDefinitionAliases(fallbackDefinition)];
-    await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+    try {
+        await writeDefinitionCaches(itemType, runtimeVersionUsed, fallbackDefinition, aliases);
+    } catch (cacheError) {
+        logWarn(`[ConnectorStoreCache] Failed to write definition cache for '${name}': ${cacheError}`);
+    }
     continue;
```

Apply the same pattern at lines 621 and 631.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 605 - 633, The three calls to writeDefinitionCaches inside resolveDefinitions can throw and currently will abort the whole resolution, discarding already-resolved entries; wrap each writeDefinitionCaches call in a try-catch (same pattern used by writeCatalogCache in loadCatalog) so failures are best-effort: catch the error, push a warning or log via warnings/processLogger including the name and error, and continue without rethrowing so resolved definitions already collected (definitionsByName, fallbackUsedNames, etc.) are preserved.
🧹 Nitpick comments (8)
workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts (1)
258-269: `resolvedItem: any | null` is redundant — `any` already includes `null`.

The type annotation `any | null` is equivalent to `any`. If the intent is to express nullability, use a more specific type (e.g., `Record<string, any> | null`) or keep it as `any` and document the null semantics via the parameter name or JSDoc.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts` around lines 258 - 269, The resolvedItem parameter in processItem is declared as "any | null", which is redundant; update its type to either a nullable specific shape (e.g., change resolvedItem to Record<string, any> | null or a dedicated interface if you know the fields) or drop the explicit null and keep "any" and document nullability; modify the function signature for processItem and any call sites that pass null to match the new type and update JSDoc or inline comments to reflect whether null is allowed.

workspaces/mi/mi-extension/scripts/update-connector-context-db.js (4)
556-559: Dead-code guard: `MAX_NAMES_PER_REQUEST > 3` can never be true.

`MAX_NAMES_PER_REQUEST` is a `const` set to `3` on line 31. The runtime check at line 557 will never trigger. If this is intended to catch accidental edits, a code comment would clarify intent; otherwise it can be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js` around lines 556 - 559, The runtime guard in main() checking MAX_NAMES_PER_REQUEST > 3 is dead code because MAX_NAMES_PER_REQUEST is a const set to 3; either remove the check entirely from the main() function or, if you meant to keep it as a safeguard against future edits, replace the check with a clear comment above the const and the guard explaining it’s intentional (e.g., “intentional runtime guard to catch accidental future changes”), or convert the const to a configurable value so the guard can be meaningful; update the code around MAX_NAMES_PER_REQUEST and the async function main() accordingly and remove any unreachable throw or add the explanatory comment next to the constant and the guard.
428-467: Batch failure silently drops items until the next pass, but `maxPasses` is only 3.

If a batch consistently fails (e.g., a single problematic connector name causes a server error), all names in that batch will remain missing after 3 passes, with no indication of which batch failed. Consider logging the specific batch names that failed so operators can identify problematic items quickly.
📝 Improve failure logging
```diff
     } catch (error) {
         const message = error instanceof Error ? error.message : String(error);
-        console.warn(`[${type}] batch failed and will be retried in next pass: ${message}`);
+        console.warn(`[${type}] batch [${batch.join(', ')}] failed and will be retried in next pass: ${message}`);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js` around lines 428 - 467, In fetchAllDetails, when a fetchDetailsBatch call throws, enhance the catch to log the specific batch names and context so operators can identify problematic items: include the current type, pass number, batch index (i), batch contents (the array variable batch), and the error message/stack (error) in the warning so failing names are visible; keep the existing retry behavior (maxPasses = 3) and ensure the log is produced inside the catch that surrounds fetchDetailsBatch so any consistently failing batch can be traced.
506-513: Non-atomic file write could leave a corrupted file on crash.
`writeTsArrayFile` reads the file, splices content, then writes back. If the process crashes mid-write, the TS file could be truncated. Since this is a dev script run manually, the risk is low, but writing to a temporary file and renaming would be safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js` around lines 506 - 513, writeTsArrayFile currently reads the TS file, builds new content and writes back directly which can corrupt the file if the process crashes mid-write; change writeTsArrayFile to write the new content to a temporary file in the same directory (e.g., filePath + ".tmp" or using a unique suffix), flush and close it, then atomically replace the original by renaming the temp file to filePath (use fs.rename); keep the same content creation logic that uses findExportArrayRange and ensure the temp file is written with the same encoding and permissions before renaming to guarantee an atomic swap.
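The temp-file-and-rename pattern above can be sketched with Node's `fs/promises`. The helper name `writeFileAtomic` and the scratch-file example are invented for illustration; the script's `writeTsArrayFile` would produce `content` exactly as it does today and delegate the final write to something like this.

```typescript
import { promises as fs } from 'fs';
import * as os from 'os';
import * as path from 'path';

async function writeFileAtomic(filePath: string, content: string): Promise<void> {
    // Sibling temp file in the same directory so rename stays on one filesystem.
    const tmpPath = path.join(
        path.dirname(filePath),
        `.${path.basename(filePath)}.${process.pid}.tmp`
    );
    await fs.writeFile(tmpPath, content, 'utf8'); // a crash here leaves only the temp file
    await fs.rename(tmpPath, filePath);           // atomic replacement on POSIX filesystems
}

// Example against a scratch file (hypothetical path):
const target = path.join(os.tmpdir(), 'connector-db-demo.ts');
const done = writeFileAtomic(target, 'export const demo = [];\n');
```

A crash between the two steps leaves the original file intact; readers never observe a half-written target.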
104-222: Template literal `${...}` interpolation expressions are not tracked by the bracket-matcher.

`findMatchingArrayEnd` correctly skips over template literal strings but doesn't handle `${expr}` interpolations inside them. If a template literal like `` `foo ${[1,2]}` `` appeared in the array content, the `[` inside the interpolation would bump `depth` and could produce a wrong match.

Since the file documents that content is pure `JSON.stringify` output (lines 469-471), this isn't a runtime risk today — but the parser complexity suggests robustness is a goal, so it's worth noting for future maintainers.

📝 Consider adding a comment about the limitation

```diff
     if (char === '`') {
         inTemplateLiteral = true;
+        // NOTE: Template-literal ${…} interpolations are not tracked.
+        // This is safe as long as content is machine-generated JSON.
         continue;
     }
```

workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts (3)
359-363: `getSummaryUrl` doesn't encode the type value in the URL replacement.

`template.replace('${type}', typeValue)` inserts the raw `typeValue` ('Connector' or 'Inbound') without `encodeURIComponent`. These specific values are safe, but if the template or type values ever change to include special characters, the URL could break. The dev script (line 318) does use `encodeURIComponent`.

📝 Defensive encoding

```diff
-    return template.replace('${type}', typeValue);
+    return template.replace('${type}', encodeURIComponent(typeValue));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 359 - 363, getSummaryUrl currently substitutes the type string raw into the template using template.replace('${type}', typeValue); make this defensive by URL-encoding the replacement: compute an encodedType = encodeURIComponent(typeValue) and use that in the replacement so the returned URL is safe even if templates or type values change; update the getSummaryUrl function (and ensure it still falls back to DEFAULT_SUMMARY_ENDPOINT_TEMPLATE) to use the encoded value.
92-94: Module-level `pendingCacheFileWrites` map grows unboundedly if write promises are never cleaned up.

The cleanup in `enqueueFileWrite` (line 277) only deletes entries whose promise reference matches. This is correct for normal flow, but if an exception occurs in the write that isn't caught (e.g., `mkdir` throws synchronously — unlikely but possible), the entry could linger. In practice the `.finally()` handler should always run, so this is a very minor concern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 92 - 94, The module-level pendingCacheFileWrites map can retain entries if the write promise creation throws synchronously; update enqueueFileWrite so the write is wrapped in Promise.resolve().then(() => { /* existing async write logic */ }) to ensure any synchronous exceptions become rejections, and move the pendingCacheFileWrites.delete(key) into a .finally() (or both .catch() and .finally()) on that wrapped promise so the map entry is always removed regardless of sync or async failures; keep references to pendingCacheFileWrites and the enqueueFileWrite function names when making the change.
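The hardened cleanup described in the prompt above can be sketched like this. The map name matches the review; the `write` callback signature and queue-chaining behavior are illustrative, not the real `enqueueFileWrite` from `connector_store_cache.ts`.

```typescript
const pendingCacheFileWrites = new Map<string, Promise<void>>();

function enqueueFileWrite(key: string, write: () => Promise<void>): Promise<void> {
    const previous = pendingCacheFileWrites.get(key) ?? Promise.resolve();
    const next = previous
        .catch(() => undefined)                    // a failed predecessor must not block the queue
        .then(() => Promise.resolve().then(write)) // converts synchronous throws into rejections
        .finally(() => {
            // Runs for sync and async failures alike, so no entry can linger.
            if (pendingCacheFileWrites.get(key) === next) {
                pendingCacheFileWrites.delete(key);
            }
        });
    pendingCacheFileWrites.set(key, next);
    return next;
}
```

Because `write` is only ever invoked inside a `.then` callback, a synchronous `throw` (e.g., from `mkdir`) becomes a rejection and still flows through `.finally()`, guaranteeing the map entry is removed.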
554-566: Sequential cache reads may be slow for large name lists.

Each name triggers a sequential `readDefinitionCacheForName` call (which itself reads multiple alias files). For, say, 20 connector names, this means 40+ sequential file reads before any store fetch begins. Consider parallelizing with `Promise.all` (or a bounded concurrency helper) to reduce I/O latency.

♻️ Example parallel approach

```diff
-    for (const name of requestedNames) {
-        const cached = await readDefinitionCacheForName(itemType, runtimeVersionUsed, name);
-        if (cached.fresh) {
-            definitionsByName[name] = cached.fresh;
-            continue;
-        }
-
-        if (cached.stale) {
-            staleByName[name] = cached.stale;
-        }
-
-        namesToFetch.push(name);
-    }
+    const cacheResults = await Promise.all(
+        requestedNames.map(async (name) => ({
+            name,
+            cached: await readDefinitionCacheForName(itemType, runtimeVersionUsed, name),
+        }))
+    );
+
+    for (const { name, cached } of cacheResults) {
+        if (cached.fresh) {
+            definitionsByName[name] = cached.fresh;
+            continue;
+        }
+        if (cached.stale) {
+            staleByName[name] = cached.stale;
+        }
+        namesToFetch.push(name);
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts` around lines 554 - 566, The loop over requestedNames performs sequential awaits on readDefinitionCacheForName causing slow I/O; change it to kick off reads in parallel (e.g., map requestedNames to promises using readDefinitionCacheForName and await Promise.all, or use a bounded-concurrency helper) and then iterate the resolved results to populate definitionsByName, staleByName, and namesToFetch while preserving the existing logic for cached.fresh and cached.stale; reference the variables/functions requestedNames, readDefinitionCacheForName, definitionsByName, staleByName, and namesToFetch when making the change.
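The "bounded concurrency helper" alternative mentioned above can be sketched as a worker-pool mapper. This is a generic sketch: `mapWithConcurrency` is an invented name, and the commented usage at the bottom only illustrates how it would slot in front of `readDefinitionCacheForName`.

```typescript
async function mapWithConcurrency<T, R>(
    items: T[],
    limit: number,
    fn: (item: T) => Promise<R>
): Promise<R[]> {
    const results: R[] = new Array(items.length);
    let nextIndex = 0;
    // Each worker pulls the next unclaimed index until the list is drained,
    // so at most `limit` calls to fn are in flight at once.
    async function worker(): Promise<void> {
        while (nextIndex < items.length) {
            const i = nextIndex++;
            results[i] = await fn(items[i]);
        }
    }
    const workerCount = Math.max(1, Math.min(limit, items.length));
    await Promise.all(Array.from({ length: workerCount }, () => worker()));
    return results;
}

// e.g. read at most 5 definition caches at a time:
// const cached = await mapWithConcurrency(requestedNames, 5, (name) =>
//     readDefinitionCacheForName(itemType, runtimeVersionUsed, name));
```

Unlike a bare `Promise.all` over all names, this caps concurrent file handles while preserving result order.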
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts`:
- Around line 183-184: The filter is fragile because it relies on the error
string; update the ProcessItemResult type (returned by processItem) to include a
boolean storeFailure flag, set storeFailure = true inside processItem when
resolvedItem is null due to a store lookup failure (instead of relying on the
error text), and then change the caller that computes storeFailedUnavailable
(where results is filtered) to use r.storeFailure rather than
r.error?.includes(...); reference processItem and ProcessItemResult to locate
the changes and ensure any places that construct ProcessItemResult are updated
to populate the new flag.
---
Duplicate comments:
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Around line 605-633: The three calls to writeDefinitionCaches inside
resolveDefinitions can throw and currently will abort the whole resolution,
discarding already-resolved entries; wrap each writeDefinitionCaches call in a
try-catch (same pattern used by writeCatalogCache in loadCatalog) so failures
are best-effort: catch the error, push a warning or log via
warnings/processLogger including the name and error, and continue without
rethrowing so resolved definitions already collected (definitionsByName,
fallbackUsedNames, etc.) are preserved.
---
Nitpick comments:
In `@workspaces/mi/mi-extension/scripts/update-connector-context-db.js`:
- Around line 556-559: The runtime guard in main() checking
MAX_NAMES_PER_REQUEST > 3 is dead code because MAX_NAMES_PER_REQUEST is a const
set to 3; either remove the check entirely from the main() function or, if you
meant to keep it as a safeguard against future edits, replace the check with a
clear comment above the const and the guard explaining it’s intentional (e.g.,
“intentional runtime guard to catch accidental future changes”), or convert the
const to a configurable value so the guard can be meaningful; update the code
around MAX_NAMES_PER_REQUEST and the async function main() accordingly and
remove any unreachable throw or add the explanatory comment next to the constant
and the guard.
- Around line 428-467: In fetchAllDetails, when a fetchDetailsBatch call throws,
enhance the catch to log the specific batch names and context so operators can
identify problematic items: include the current type, pass number, batch index
(i), batch contents (the batch array), and the error message/stack in the
warning so failing names are visible. Keep the existing retry behavior
(maxPasses = 3), and produce the log inside the catch surrounding
fetchDetailsBatch so any consistently failing batch can be traced.
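One possible shape for the richer warning, sketched as a small formatter (the function name and message layout are assumptions, not the script's actual code):

```javascript
// Builds a warning string that carries enough context to trace a consistently
// failing batch: type, retry pass, batch index, the batch's names, and the error.
function formatBatchFailure(type, pass, batchIndex, batch, error) {
  return `fetchDetailsBatch failed (type=${type}, pass=${pass}, batch=${batchIndex}): ` +
    `names=[${batch.join(', ')}] error=${error.message}`;
}
```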
- Around line 506-513: writeTsArrayFile currently reads the TS file, builds the
new content, and writes it back in place, which can corrupt the file if the
process crashes mid-write. Change writeTsArrayFile to write the new content to a
temporary file in the same directory (e.g., filePath + ".tmp" or a unique
suffix), flush and close it, then atomically replace the original by renaming
the temp file to filePath (use fs.rename). Keep the content-creation logic that
uses findExportArrayRange, and write the temp file with the same encoding and
permissions before renaming to guarantee an atomic swap.
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts`:
- Around line 359-363: getSummaryUrl currently substitutes the type string raw
into the template via template.replace('${type}', typeValue). Make this
defensive by URL-encoding the replacement: compute encodedType =
encodeURIComponent(typeValue) and use it in the replacement so the returned URL
stays valid even if templates or type values change. Update getSummaryUrl to use
the encoded value, and ensure it still falls back to
DEFAULT_SUMMARY_ENDPOINT_TEMPLATE.
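A sketch of the defensive version; the endpoint URL is a placeholder assumption, but the constant and function names follow the review:

```javascript
// Placeholder template; the real DEFAULT_SUMMARY_ENDPOINT_TEMPLATE lives in the
// extension and is assumed to contain a literal ${type} token.
const DEFAULT_SUMMARY_ENDPOINT_TEMPLATE = 'https://store.example.com/summary?type=${type}';

// encodeURIComponent keeps the URL valid even if the type value contains
// spaces, slashes, or other reserved characters.
function getSummaryUrl(typeValue, template = DEFAULT_SUMMARY_ENDPOINT_TEMPLATE) {
  return template.replace('${type}', encodeURIComponent(typeValue));
}
```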
- Around line 92-94: The module-level pendingCacheFileWrites map can retain
entries if creating the write promise throws synchronously. Update
enqueueFileWrite so the write is wrapped in Promise.resolve().then(() => { /*
existing async write logic */ }), ensuring any synchronous exception becomes a
rejection, and move pendingCacheFileWrites.delete(key) into a .finally() (or
both .catch() and .finally()) on that wrapped promise so the map entry is always
removed regardless of sync or async failure. Keep the pendingCacheFileWrites and
enqueueFileWrite names when making the change.
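The hardened shape can be sketched as follows; the writeFn parameter is an assumption standing in for the existing async write logic:

```javascript
// De-duplication map: at most one in-flight write per cache key.
const pendingCacheFileWrites = new Map();

function enqueueFileWrite(key, writeFn) {
  const existing = pendingCacheFileWrites.get(key);
  if (existing) return existing; // reuse the in-flight write for this key

  const promise = Promise.resolve()
    .then(() => writeFn()) // a synchronous throw inside writeFn becomes a rejection
    .finally(() => pendingCacheFileWrites.delete(key)); // entry removed either way

  pendingCacheFileWrites.set(key, promise);
  return promise;
}
```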
- Around line 554-566: The loop over requestedNames awaits
readDefinitionCacheForName sequentially, making cache I/O slow. Kick off the
reads in parallel instead (e.g., map requestedNames to promises via
readDefinitionCacheForName and await Promise.all, or use a bounded-concurrency
helper), then iterate the resolved results to populate definitionsByName,
staleByName, and namesToFetch, preserving the existing handling of cached.fresh
and cached.stale.
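The parallelised read can be sketched like this; readDefinitionCacheForName is a simulated stand-in returning the `{ fresh?, stale? }` shape the review describes, and the surrounding function is an assumed wrapper, not the extension's actual code:

```javascript
// Simulated cache: 'http' hits fresh, 'file' hits stale, everything else misses.
async function readDefinitionCacheForName(name) {
  if (name === 'http') return { fresh: { name } };
  if (name === 'file') return { stale: { name } };
  return {};
}

async function readCaches(requestedNames) {
  const definitionsByName = new Map();
  const staleByName = new Map();
  const namesToFetch = [];

  // Kick off every read up front and await them together, instead of one await
  // per iteration of the loop.
  const results = await Promise.all(requestedNames.map(n => readDefinitionCacheForName(n)));

  requestedNames.forEach((name, i) => {
    const cached = results[i];
    if (cached.fresh) {
      definitionsByName.set(name, cached.fresh);
    } else if (cached.stale) {
      staleByName.set(name, cached.stale); // keep stale copy as fallback
      namesToFetch.push(name);             // but still try to refresh it
    } else {
      namesToFetch.push(name);
    }
  });

  return { definitionsByName, staleByName, namesToFetch };
}
```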
In
`@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts`:
- Around line 258-269: The resolvedItem parameter in processItem is declared as
"any | null", which is redundant (any already includes null). Either give it a
nullable specific shape (e.g., Record<string, any> | null, or a dedicated
interface if the fields are known) or drop the explicit null, keep "any", and
document the nullability. Update the processItem signature and any call sites
that pass null to match, and adjust the JSDoc or inline comments to state
whether null is allowed.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- workspaces/mi/mi-extension/scripts/update-connector-context-db.js
- workspaces/mi/mi-extension/src/ai-features/agent-mode/context/synapse_expression_guide.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/connector_store_cache.ts
- workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts
- workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- workspaces/mi/mi-extension/src/rpc-managers/agent-mode/rpc-manager.ts
const fallbackUsed = results.filter(r => r.success && r.usedFallback);
const storeFailedUnavailable = results.filter(r => !r.success && r.error?.includes('connector store is unavailable'));
🛠️ Refactor suggestion | 🟠 Major
Fragile string matching to detect store-failure items.
r.error?.includes('connector store is unavailable') on line 184 is tightly coupled to the exact error message constructed in processItem (line 277). If the wording changes, this filter silently stops matching. Consider using the structured storeFailure boolean from the lookup result instead of parsing error strings.
♻️ Proposed: propagate `storeFailure` through ProcessItemResult
interface ProcessItemResult {
name: string;
type: 'connector' | 'inbound';
success: boolean;
alreadyAdded?: boolean;
usedFallback?: boolean;
+ storeFailure?: boolean;
error?: string;
}
Then in processItem, set storeFailure in the result when resolvedItem is null due to a store failure:
- return {
- name: itemName,
- type: itemType,
- success: false,
- error: storeFailure
- ? `... connector store is unavailable ...`
- : `... not found in connector store or fallback`
- };
+ return {
+ name: itemName,
+ type: itemType,
+ success: false,
+ storeFailure,
+ error: storeFailure
+ ? `... connector store is unavailable ...`
+ : `... not found in connector store or fallback`
+ };And in the caller:
-const storeFailedUnavailable = results.filter(r => !r.success && r.error?.includes('connector store is unavailable'));
+const storeFailedUnavailable = results.filter(r => !r.success && r.storeFailure);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workspaces/mi/mi-extension/src/ai-features/agent-mode/tools/project_tools.ts`
around lines 183-184: the filter is fragile because it relies on the error
string. Update the ProcessItemResult type (returned by processItem) to include a
boolean storeFailure flag, set storeFailure = true inside processItem when
resolvedItem is null due to a store lookup failure (instead of relying on the
error text), and change the caller that computes storeFailedUnavailable (where
results is filtered) to use r.storeFailure rather than r.error?.includes(...).
Reference processItem and ProcessItemResult to locate the changes, and ensure
every place that constructs a ProcessItemResult populates the new flag.
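The structured-flag approach the review proposes can be sketched as a runnable snippet; the partitionResults wrapper is an illustrative assumption, while the field names follow the review's ProcessItemResult:

```javascript
// Caller-side filtering on the structured storeFailure flag rather than on
// error-message wording, so rewording the error text cannot break the filter.
function partitionResults(results) {
  return {
    fallbackUsed: results.filter(r => r.success && r.usedFallback),
    storeFailedUnavailable: results.filter(r => !r.success && r.storeFailure),
  };
}
```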
Fixes wso2/mi-vscode#1404
Summary by CodeRabbit
New Features
Improvements
Chores
UI