Add plan-based Claude/OpenAI connections with OAuth #509
Conversation
- Introduced OpenAI Codex as a new provider type with OAuth support.
- Implemented authentication flow including token exchange and callback server.
- Updated settings schema to include Codex provider and associated OAuth fields.
- Enhanced chat and provider management to accommodate Codex integration.
- Added migration logic to transition existing settings to support the new provider.
- Comprehensive tests added for migration and Codex functionality.

- Added comments explaining the use of Node's http/https modules for Codex endpoints due to CORS restrictions.
- Updated the nodePost function to throw an error when called on mobile platforms, ensuring proper platform handling.
- Refactored the import of IncomingMessage type for better clarity and consistency.

- Introduced the Anthropic Claude Code provider, including OAuth integration for authentication.
- Implemented the necessary authentication flow, including token exchange and authorization URL generation.
- Updated settings schema to include the new provider type and associated OAuth fields.
- Enhanced provider management and chat model handling to accommodate Claude Code integration.
- Added migration logic to transition existing settings to support the new provider.
- Comprehensive tests added for migration and Claude Code functionality.

- Upgraded OpenAI package from version 4.91.1 to 6.8.1 in package.json and package-lock.json.
- Refactored DeepSeekMessageAdapter and PerplexityMessageAdapter to normalize tool calls using the new method.
- Enhanced OpenAIMessageAdapter with a new normalizeToolCalls method to streamline tool call handling.
- Updated NoStainlessOpenAI to improve header management in requests.

- Introduced settings for the OpenAI Codex model, allowing users to configure reasoning effort and summary.
- Enhanced CodexMessageAdapter to handle reasoning summary extraction and inclusion in responses.
- Updated OpenAICodexProvider to normalize requests with reasoning parameters.
- Modified chat model schema to accommodate new reasoning fields.
- Added utility functions for reasoning summary extraction in response payloads.

…AI and Anthropic models
- Reintroduced OpenAI provider settings in the constants file for consistency.
- Updated DEFAULT_PROVIDERS and DEFAULT_CHAT_MODELS to include new models for OpenAI Codex and Anthropic Claude Code.
- Enhanced migration logic to ensure proper transition to version 15, incorporating new default providers and chat models.

…ropic models
- Replaced 'openai-codex' and 'anthropic-claude-code' with 'openai-plan' and 'anthropic-plan' in constants and provider settings.
- Updated DEFAULT_PROVIDERS and DEFAULT_CHAT_MODELS to reflect new provider types.
- Adjusted migration logic to ensure compatibility with the new provider types.
- Enhanced related components and schemas to support the updated provider structure.

…opic plans
- Added new PlanConnectionsSection to manage connections for OpenAI and Claude subscriptions.
- Introduced ConnectClaudePlanModal and ConnectOpenAIPlanModal for OAuth authentication flows.
- Enhanced styles for plan connection cards and status indicators.
- Updated SettingsTabRoot to include the new PlanConnectionsSection.
- Refactored ProvidersSection to exclude subscription providers from API key management.
📝 Walkthrough

Adds plan-based LLM provider support (Claude Plan, OpenAI Codex): new OAuth PKCE flows, provider implementations and adapters, HTTP/SSE utilities, settings UI and modals, schema migration to v15, constants and types updates, plus wiring to propagate provider token refreshes into settings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Modal as ConnectClaudePlanModal
    participant OAuth as Claude OAuth
    participant TokenEP as Token Endpoint
    participant Settings as Settings Context
    User->>Modal: Click "Connect"
    Modal->>Modal: Generate PKCE & state
    Modal->>OAuth: Open authorize URL (browser)
    OAuth-->>User: Consent screen
    User->>Modal: Paste redirect URL / code
    Modal->>TokenEP: POST code + verifier
    TokenEP-->>Modal: Return accessToken, refreshToken, expiresAt
    Modal->>Settings: Persist tokens to provider config
    Settings-->>Modal: Confirm update
    Modal->>User: Show connected status
```
```mermaid
sequenceDiagram
    actor User
    participant Modal as ConnectOpenAIPlanModal
    participant CallbackSrv as Local Callback Server
    participant OAuth as OpenAI OAuth
    participant TokenEP as Token Endpoint
    participant Settings as Settings Context
    User->>Modal: Click "Login"
    Modal->>Modal: Generate PKCE & state
    Modal->>CallbackSrv: Start local callback server
    Modal->>OAuth: Open authorize URL
    OAuth-->>User: Consent screen
    User->>OAuth: Approve & redirect
    OAuth->>CallbackSrv: Redirect with code & state
    CallbackSrv->>Modal: Return authorization code
    Modal->>TokenEP: POST code + verifier
    TokenEP-->>Modal: Return tokens
    Modal->>Settings: Persist provider tokens
    Modal->>CallbackSrv: Stop server
    Modal->>User: Close modal, show connected status
```
```mermaid
sequenceDiagram
    actor User
    participant Chat as Chat Component
    participant Manager as LLM Manager
    participant Provider as Plan Provider
    participant OAuth as Token Endpoint
    participant Adapter as Message Adapter
    participant Settings as Settings Context
    User->>Chat: Send message (plan model)
    Chat->>Manager: getChatModelClient(model, settings, setSettings)
    Manager->>Provider: Instantiate with onProviderUpdate
    Provider->>Provider: getAuthHeaders()
    alt tokens expired
        Provider->>OAuth: POST refresh_token
        OAuth-->>Provider: New tokens
        Provider->>Manager: onProviderUpdate(providerId, update)
        Manager->>Settings: setSettings(updatedSettings)
    end
    Provider->>Adapter: generateResponse / streamResponse
    Adapter-->>Provider: LLM response
    Provider-->>Chat: Return response
    Chat-->>User: Render response
```
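All three flows above start from the same "Generate PKCE & state" step. A minimal sketch of that step, assuming Node's `crypto` module and RFC 7636's S256 challenge method; the helper names here (`generatePkce`, `base64UrlEncode`) are illustrative and may not match the PR's actual utilities:

```typescript
import { createHash, randomBytes } from 'node:crypto'

// Base64url encoding without padding, as RFC 7636 requires.
function base64UrlEncode(buf: Buffer): string {
  return buf
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '')
}

// Generate a PKCE verifier/challenge pair plus a random state value.
function generatePkce(): { verifier: string; challenge: string; state: string } {
  const verifier = base64UrlEncode(randomBytes(32)) // 43-char verifier
  const challenge = base64UrlEncode(
    createHash('sha256').update(verifier).digest(), // S256 method
  )
  const state = base64UrlEncode(randomBytes(16)) // CSRF-protection value
  return { verifier, challenge, state }
}
```

The verifier stays local; only the challenge and state go into the authorize URL, and the verifier is sent later with the token request.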
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/chat-view/useChatStreamManager.ts (1)
53-87: The `setSettings` parameter is correctly propagated to both code paths; however, there is a confirmed performance issue upstream.

The code in `useChatStreamManager.ts` is correct, but `setSettings` is not a stable reference. In `src/ChatView.tsx` (lines 78-80), `setSettings` is created as an inline arrow function without `useCallback`, causing it to be recreated on every render. This forces `SettingsProvider`'s `useMemo` and all consumers (including this `useMemo`) to recalculate unnecessarily.

Wrap `setSettings` in `useCallback` in `ChatView.tsx` to stabilize the reference and prevent cascade re-renders:

Example fix

```ts
const handleSetSettings = useCallback((newSettings) => {
  this.plugin.setSettings(newSettings)
}, [])
```

Then pass `handleSetSettings` instead of an inline function.
🤖 Fix all issues with AI agents
In `@src/components/settings/modals/ConnectClaudePlanModal.tsx`:
- Around line 65-113: The connect function currently uses the raw code and never
validates OAuth state; update connect to accept a full redirect URL by parsing
query parameters (extract both "code" and "state" from the pasted input),
validate that the extracted state matches the expected state variable before
calling exchangeClaudeCodeForTokens, and if the state is missing or mismatched
show a Notice and abort; then pass the parsed code (not the raw input) plus
pkceVerifier/state into exchangeClaudeCodeForTokens and proceed to update the
provider oauth tokens as before (refer to connect, code, state, pkceVerifier,
and exchangeClaudeCodeForTokens to locate the changes).
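The parse-and-validate step this instruction asks for can be sketched as follows; `extractAuthParams` and `validateAuthRedirect` are hypothetical helper names, not the modal's actual API, and the error messages are illustrative:

```typescript
// Parse the pasted redirect URL into its OAuth query parameters.
function extractAuthParams(redirectUrl: string): {
  code: string | null
  state: string | null
} {
  const url = new URL(redirectUrl)
  return {
    code: url.searchParams.get('code'),
    state: url.searchParams.get('state'),
  }
}

// Validate state against the value generated before authorization,
// and only then hand the code to the token exchange.
function validateAuthRedirect(redirectUrl: string, expectedState: string): string {
  const { code, state } = extractAuthParams(redirectUrl)
  if (!code) {
    throw new Error('Redirect URL is missing the authorization code.')
  }
  if (!state || state !== expectedState) {
    throw new Error('OAuth state mismatch; aborting (possible CSRF).')
  }
  return code
}
```

In the modal, the thrown error would surface as a Notice instead, but the control flow (abort before any token exchange) is the same.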
In `@src/components/settings/modals/ConnectOpenAIPlanModal.tsx`:
- Around line 45-195: The manual flow currently only extracts the authorization
code via extractCodeFromRedirectUrl but ignores the OAuth state; update
connectWithRedirectUrl to also extract and validate the state parameter from the
redirectUrl (either extend extractCodeFromRedirectUrl to return state or add an
extractStateFromRedirectUrl helper), call ensureAuthContext() to get the
expected state (or use the stored state variable), compare the returned state to
the expected state before calling exchangeCodexCodeForTokens, and if they do not
match setManualError to a clear message and abort the flow (do not call
exchangeCodexCodeForTokens or applyTokens). Reference
extractCodeFromRedirectUrl, ensureAuthContext, connectWithRedirectUrl,
setManualError, and exchangeCodexCodeForTokens when making the change.
In `@src/components/settings/sections/PlanConnectionsSection.tsx`:
- Around line 53-63: The onConfirm callback captures the outer settings value
and can produce a stale update; change the setSettings call in onConfirm to use
a functional update so it receives the latest state (use the updater form that
accepts previous settings and returns { ...previous, providers:
previous.providers.map(...) } applying the providerId/providerType match and
clearing oauth). Update the references inside to use the updater's parameter
instead of the closed-over settings and keep the same provider matching logic in
the mapping; ensure setSettings supports functional updates before applying.
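The functional-update pattern being requested can be sketched with a toy store; the `Settings`/`Provider` shapes below are simplified stand-ins for the plugin's real types, not its actual definitions:

```typescript
type Provider = { id: string; type: string; oauth?: { accessToken: string } }
type Settings = { providers: Provider[] }
type Updater = Settings | ((prev: Settings) => Settings)

// Tiny stand-in for a React-style settings store that accepts
// either a value or a functional updater.
function makeStore(initial: Settings) {
  let state = initial
  return {
    get: () => state,
    setSettings: (next: Updater) => {
      state = typeof next === 'function' ? next(state) : next
    },
  }
}

// The stale-closure-safe version: read `prev` inside the updater
// instead of a settings snapshot captured when the callback was created.
function clearOauth(store: ReturnType<typeof makeStore>, providerId: string) {
  store.setSettings((prev) => ({
    ...prev,
    providers: prev.providers.map((p) =>
      p.id === providerId ? { ...p, oauth: undefined } : p,
    ),
  }))
}
```

Because the updater receives the latest state, a token refresh and a disconnect that race each other can no longer overwrite one another's changes.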
In `@src/core/llm/anthropicClaudeCodeProvider.ts`:
- Around line 86-123: The refresh logic in getAuthHeaders overwrites the stored
refresh token with undefined when the token endpoint omits refresh_token; update
the refresh handling in getAuthHeaders (inside the try block where
refreshClaudeCodeAccessToken is called) to preserve the existing refresh token
when tokens.refresh_token is undefined by setting updatedOauth.refreshToken to
tokens.refresh_token ?? this.provider.oauth.refreshToken (or equivalent), so the
stored refresh token is retained for future refreshes and only replaced when a
new refresh_token is present; keep the rest of the updatedOauth fields as-is and
ensure onProviderUpdate persists the preserved token.
In `@src/core/llm/claudeCodeAuth.ts`:
- Around line 95-108: postTokenRequest currently uses postJson to send a JSON
body to CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT, but the OAuth token endpoint must use
application/x-www-form-urlencoded (UTF-8) per RFC 6749 §4.1.3; replace the JSON
POST with a form-encoded POST: convert the filtered payload to a URL-encoded
string (key=value&...), set the Content-Type header to
"application/x-www-form-urlencoded; charset=UTF-8", and send that to
CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT (do this inside postTokenRequest or call a
helper that performs form-encoded POST instead of postJson) so Claude Code token
requests are encoded correctly.
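The form-encoded body construction can be sketched with the standard `URLSearchParams` API; field names follow RFC 6749 §4.1.3, the values are placeholders, and the undefined-filtering mirrors what the instruction describes:

```typescript
// Serialize a token-request payload as application/x-www-form-urlencoded,
// dropping fields that are undefined rather than sending the string "undefined".
function buildTokenRequestBody(
  params: Record<string, string | undefined>,
): string {
  const form = new URLSearchParams()
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) form.set(key, value)
  }
  return form.toString()
}

// Send this body with header:
//   Content-Type: application/x-www-form-urlencoded; charset=UTF-8
const exampleBody = buildTokenRequestBody({
  grant_type: 'authorization_code',
  code: 'AUTH_CODE', // placeholder
  code_verifier: 'VERIFIER', // placeholder
  redirect_uri: undefined, // absent field: omitted entirely
})
```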
In `@src/core/llm/codexAuth.ts`:
- Around line 40-44: codexCallbackServer and isCodexCallbackStopping are
module-level singletons that break under concurrent OAuth flows; replace them
with per-flow state (e.g., create a CodexCallbackSession object or factory) that
holds a Server instance and a stopping flag for each flow, or maintain a Map
keyed by a unique flowId to store {server, isStopping, timeoutHandle}; update
start/stop logic to accept/return the session/flowId, use CALLBACK_TIMEOUT_MS
from the session control, and ensure proper cleanup/clearTimeout on completion
so concurrent connect attempts do not clobber each other's state.
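One way to structure the per-flow state this instruction describes is a Map keyed by a unique flowId; this sketch stubs the HTTP server as `unknown` and is an illustration of the bookkeeping, not the PR's actual implementation:

```typescript
type CallbackSession = {
  server: unknown // would be an http.Server in the real code
  isStopping: boolean
  timeoutHandle?: ReturnType<typeof setTimeout>
}

const sessions = new Map<string, CallbackSession>()
let nextFlowId = 0

// Register a new flow's server with its own timeout; returns the flowId.
function startSession(
  server: unknown,
  timeoutMs: number,
  onTimeout: (flowId: string) => void,
): string {
  const flowId = `flow-${nextFlowId++}`
  const session: CallbackSession = { server, isStopping: false }
  session.timeoutHandle = setTimeout(() => onTimeout(flowId), timeoutMs)
  sessions.set(flowId, session)
  return flowId
}

// Stop exactly one flow; other concurrent flows are untouched.
function stopSession(flowId: string): void {
  const session = sessions.get(flowId)
  if (!session || session.isStopping) return
  session.isStopping = true
  if (session.timeoutHandle) clearTimeout(session.timeoutHandle)
  sessions.delete(flowId)
}
```

Because each flow owns its `{server, isStopping, timeoutHandle}` triple, a second connect attempt can no longer clobber the first one's singleton state.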
In `@src/core/llm/manager.ts`:
- Around line 41-53: The onProviderUpdate closure uses a stale settings
snapshot; instead ensure you base the provider merge on the most recent state:
either (A) refactor setSettings to accept a functional updater and change
onProviderUpdate to call setSettings(prev => ({ ...prev, providers:
prev.providers.map(...) })) using the targetProviderId and update, or (B) if you
cannot change setSettings now, fetch the freshest settings before writing (e.g.,
call the current settings getter or reload settings into a local latestSettings
variable) and then compute updatedProviders from latestSettings.providers before
calling setSettings({ ...latestSettings, providers: updatedProviders }); update
references: onProviderUpdate, setSettings, settings, LLMProvider.
In `@src/core/llm/openaiCodexProvider.ts`:
- Around line 82-124: In getAuthHeaders(), when handling the
refreshCodexAccessToken response, preserve the existing refresh token if
tokens.refresh_token is missing by falling back to
this.provider.oauth.refreshToken (instead of assigning undefined) when building
updatedOauth; update the construction of updatedOauth (and any accountId
fallback via extractCodexAccountId) so refreshToken uses tokens.refresh_token ??
this.provider.oauth.refreshToken to avoid forcing logins when the refresh
response omits a refresh_token.
In `@src/utils/llm/httpTransport.ts`:
- Around line 134-146: When signal.aborted is true we call request.destroy(...)
but the surrounding promise may never settle; update the abort-precheck in the
http transport so it explicitly triggers the same rejection path as the request
'error' handler: either invoke the existing error handler or directly reject the
promise with a new AbortError when you detect signal.aborted (use the same Error
message used by the abortHandler), and keep the existing cleanup
(removeEventListener) logic. Refer to the variables and handlers in this file:
signal.aborted check, request, the abortHandler, and the existing 'error' event
handler to ensure you call the same rejection/cleanup flow synchronously when
pre-aborted.
In `@src/utils/llm/sse.ts`:
- Around line 102-114: parseSseEvent currently splits rawEvent on '\n' which
misses events using lone '\r' line endings; normalize line endings before
splitting (or split using a regex like /\r\n|\n|\r/) so the subsequent .map and
.filter operate correctly; update the parseSseEvent implementation (referencing
function name parseSseEvent and related logic that trims trailing '\r' and
filters 'data:' lines) to normalize line endings first, then continue extracting
and parsing the 'data:' lines as before.
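A sketch of the normalization, splitting on all three terminators the SSE spec allows (`\r\n`, `\n`, and lone `\r`); the function names are illustrative, not the file's actual ones:

```typescript
// Split a raw SSE event on any spec-permitted line terminator.
function splitSseLines(rawEvent: string): string[] {
  return rawEvent.split(/\r\n|\n|\r/)
}

// Extract the payloads of 'data:' lines from one raw event.
function extractSseData(rawEvent: string): string[] {
  return splitSseLines(rawEvent)
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trimStart())
}
```

A plain `split('\n')` would leave `data: a\rdata: b` as a single unparseable line; the regex split handles it.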
🧹 Nitpick comments (14)
src/components/settings/sections/McpSection.tsx (1)
238-260: Inconsistent icon size for Disconnected status.

The `CircleMinus` icon uses size 14 while all other status icons (`Check`, `Loader2`, `X`) use size 16. This creates visual inconsistency. If intentional for optical balance, consider adding a comment explaining the rationale.

♻️ Suggested fix for consistency

```diff
 [McpServerStatus.Disconnected]: {
-  icon: <CircleMinus size={14} />,
+  icon: <CircleMinus size={16} />,
   label: 'Disconnected',
   statusClass: 'smtcmp-mcp-server-status-badge--disconnected',
 },
```

src/types/embedding-model.types.ts (2)
22-22: Consider tracking the TODO for provider list de-duplication.

The TODO comment highlights a valid architectural concern. Consider opening an issue to track this technical debt and prevent it from growing as more providers are added.
Would you like me to help draft an issue to track this refactoring task?
22-31: Remove unused `anthropic-plan` and `openai-plan` from embedding model schema, or implement actual embedding support.

The embedding schema includes `anthropic-plan` and `openai-plan` providers (lines 25–31), but they are never instantiated in `DEFAULT_EMBEDDING_MODELS`; only `openai`, `gemini`, and `ollama` are used for embeddings. Plan providers appear exclusively in chat models. Remove these unused schema entries or implement proper embedding support if these providers will eventually support embeddings.

src/utils/llm/httpTransport.ts (2)
92-150: Consider adding a request timeout.

Network requests without timeouts can hang indefinitely if the server doesn't respond. Consider adding an optional timeout parameter or a reasonable default timeout to prevent resource leaks.

♻️ Timeout implementation sketch

```ts
async function nodePost(
  endpoint: string,
  body: string,
  headers?: Record<string, string>,
  signal?: AbortSignal,
  timeoutMs: number = 30000, // 30s default
): Promise<IncomingMessage> {
  // ... existing code ...
  return new Promise((resolve, reject) => {
    const request = client.request(/* ... */)
    const timeoutId = setTimeout(() => {
      request.destroy(new Error('Request timed out'))
      reject(new Error('Request timed out'))
    }, timeoutMs)
    request.on('error', (error) => {
      clearTimeout(timeoutId)
      reject(error)
    })
    // Clear timeout on successful response
    // ... rest of implementation
  })
}
```
39-41: Error messages lack response body context in fetchFn paths.

When using the `fetchFn` path, error messages only include the status code but not the response body, making debugging harder. The nodePost path includes the body in errors (lines 50, 86).

♻️ Include response body in error messages

```diff
 if (!response.ok) {
-  throw new Error(`Request failed: ${response.status}`)
+  const errorBody = await response.text().catch(() => '')
+  throw new Error(`Request failed: ${response.status} ${errorBody}`)
 }
```

Also applies to: 75-77
src/utils/llm/sse.ts (1)
61-81: Consider optimizing boundary detection for performance.

The current implementation searches for all three boundary types on every call, which is fine for typical SSE payloads. However, for high-throughput streams, you could short-circuit after finding the first boundary if it's at index 0, or track the expected boundary type after the first detection.
src/core/llm/codexAuth.ts (2)
101-207: Callback server implementation is solid, but consider adding a connection timeout.

The server handles the happy path and error cases well. However, individual HTTP connections don't have a timeout; a slow client could hold the connection indefinitely. Node's `server.setTimeout()` could be useful here.

Additionally, consider destroying active connections in `finalize` to ensure prompt cleanup:

🔧 Suggested enhancement

```diff
 server.on('error', (error) => {
   finalize(
     error instanceof Error ? error : new Error('OAuth callback server error'),
   )
 })

+server.setTimeout(30000) // 30s connection timeout
+
 server.listen(port, hostname, () => {
   codexCallbackServer = server
 })
```
273-280: Minor bias in random string generation.

Using `b % chars.length` introduces a slight bias since 256 is not evenly divisible by 66 (the character set length). For cryptographic purposes this is negligible, but you could use rejection sampling for a more uniform distribution.

src/core/llm/claudeCodeAuth.ts (2)
81-93: Clarify the `code#state` parsing logic.

The `parseAuthorizationCode` function splits on `#` to extract state from the code. This appears to handle a specific edge case where state is appended to the authorization code. Consider adding a comment explaining when this format occurs to aid future maintainers.

```diff
+/**
+ * Parses authorization code that may contain state appended after a '#' fragment.
+ * This handles the case where the OAuth callback includes state in the code parameter.
+ */
 function parseAuthorizationCode(
   code: string,
   state?: string,
 ): { code: string; state?: string } {
```
110-126: Code duplication: `generateRandomString` and `base64UrlEncode` are identical to codexAuth.ts.

These utility functions are duplicated between `codexAuth.ts` and `claudeCodeAuth.ts`. Consider extracting them to a shared module (e.g., `src/utils/crypto.ts` or `src/utils/oauth.ts`).

src/settings/schema/migrations/14_to_15.test.ts (1)
3-27: Tests cover basic cases but consider adding edge case coverage.

The tests verify version increment and provider addition, which is good. Consider adding tests for:

- Migration when `anthropic-plan` or `openai-plan` providers already exist (idempotency)
- Migration with empty providers array
- Preservation of existing provider data (API keys, other settings)

💡 Additional test cases to consider

```ts
it('should preserve existing providers and their data', () => {
  const oldSettings = {
    version: 14,
    providers: [{ type: 'anthropic', id: 'anthropic', apiKey: 'test-key' }],
    chatModels: [],
  }

  const result = migrateFrom14To15(oldSettings)
  const providers = result.providers as {
    type: string
    id: string
    apiKey?: string
  }[]
  const anthropic = providers.find((p) => p.type === 'anthropic')
  expect(anthropic?.apiKey).toBe('test-key')
})

it('should not duplicate plan providers if already present', () => {
  const oldSettings = {
    version: 14,
    providers: [{ type: 'anthropic-plan', id: 'anthropic-plan' }],
    chatModels: [],
  }

  const result = migrateFrom14To15(oldSettings)
  const providers = result.providers as { type: string }[]
  const planProviders = providers.filter((p) => p.type === 'anthropic-plan')
  expect(planProviders.length).toBe(1)
})
```

src/components/settings/sections/PlanConnectionsSection.tsx (1)
151-173: Consider using dedicated CSS class names for plan connection badges.

The badge component reuses MCP server status badge CSS classes (`smtcmp-mcp-server-status-badge`, `smtcmp-mcp-server-status-badge--connected`). While this works for styling consistency, it may cause confusion during maintenance. Consider either extracting shared badge styles to a generic class or creating dedicated plan connection badge classes.

src/types/provider.types.ts (1)
17-41: Consider extracting the shared OAuth schema to reduce duplication.

The `oauth` object schema is identical between `anthropic-plan` and `openai-plan`. Extracting it to a reusable schema would improve maintainability.

Suggested refactor

```diff
+const oauthSchema = z.object({
+  accessToken: z.string(),
+  refreshToken: z.string(),
+  expiresAt: z.number(),
+  accountId: z.string().optional(),
+})
+
 export const llmProviderSchema = z.discriminatedUnion('type', [
   z.object({
     type: z.literal('anthropic-plan'),
     ...baseLlmProviderSchema.shape,
-    oauth: z
-      .object({
-        accessToken: z.string(),
-        refreshToken: z.string(),
-        expiresAt: z.number(),
-        accountId: z.string().optional(),
-      })
-      .optional(),
+    oauth: oauthSchema.optional(),
   }),
   z.object({
     type: z.literal('openai-plan'),
     ...baseLlmProviderSchema.shape,
-    oauth: z
-      .object({
-        accessToken: z.string(),
-        refreshToken: z.string(),
-        expiresAt: z.number(),
-        accountId: z.string().optional(),
-      })
-      .optional(),
+    oauth: oauthSchema.optional(),
   }),
```

src/components/settings/sections/models/ChatModelSettings.tsx (1)
232-234: Widen `typedModel` for anthropic-plan to avoid a misleading cast.

Since the check now includes `'anthropic-plan'`, reflect that in the type for clarity and safety.

♻️ Suggested tweak

```diff
-  const typedModel = model as ChatModel & { providerType: 'anthropic' }
+  const typedModel = model as ChatModel & {
+    providerType: 'anthropic' | 'anthropic-plan'
+  }
```
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/settings/sections/ProvidersSection.tsx (1)
124-131: Minor text inconsistency: "Set API key" vs "Credential" header.

The column header was changed to "Credential" (line 115), but the placeholder text on line 130 still says "Set API key". Consider updating for consistency.

📝 Suggested fix

```diff
 <td
   className="smtcmp-settings-table-api-key"
   onClick={() => {
     new EditProviderModal(app, plugin, provider).open()
   }}
 >
-  {provider.apiKey ? '••••••••' : 'Set API key'}
+  {provider.apiKey ? '••••••••' : 'Set credential'}
 </td>
```
🤖 Fix all issues with AI agents
In `@package.json`:
- Line 72: The package bump to "openai": "^6.8.1" breaks all v4-style usage
(e.g., client.chat.completions.create(), ChatCompletion/ChatCompletionChunk
types, streaming deltas and tool/usage fields); either revert the dependency to
a v4.x release in package.json or migrate all OpenAI calls to the new Responses
API: replace client.chat.completions.create() usages with
client.responses.create(), update handling of response objects (replace
ChatCompletion/ChatCompletionChunk types), refactor streaming logic to the
event-driven Responses API format, and adjust access to function/tool outputs
and usage fields per the Responses API migration guide.
In `@src/constants.ts`:
- Around line 10-16: Replace the undocumented ChatGPT backend URL in the
CODEX_RESPONSES_ENDPOINT constant with the official OpenAI Responses API; update
the value of CODEX_RESPONSES_ENDPOINT to use
"https://api.openai.com/v1/responses" so the code uses the public, documented
endpoint referenced by the CODEX_RESPONSES_ENDPOINT symbol.
In `@src/core/llm/claudeCodeAuth.ts`:
- Around line 104-107: The token request currently uses postJson which sends
application/json; replace the
postJson<ClaudeCodeTokenResponse>(CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT, payload)
call in claudeCodeAuth.ts with a form-urlencoded POST: serialize the payload
with URLSearchParams (or equivalent), send it to
CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT with header Content-Type:
application/x-www-form-urlencoded and parse the response as
ClaudeCodeTokenResponse; if a helper like postForm/postUrlEncoded exists use
that, otherwise construct the fetch/axios request directly ensuring the body is
urlencoded and the correct header is set.
In `@src/core/llm/manager.ts`:
- Around line 41-53: onProviderUpdate closes over the current settings snapshot
and can overwrite concurrent updates; change the setSettings call in
onProviderUpdate to a functional update that reads the latest settings (e.g.
setSettings(prev => ({ ...prev, providers: prev.providers.map(...) }))) and
merge the provider change against prev.providers rather than the closed-over
settings, referencing onProviderUpdate, setSettings, settings, providers and
LLMProvider to locate and update the merge logic.
In `@src/types/embedding-model.types.ts`:
- Around line 24-31: The embeddingModelSchema currently allows providerType
literals (e.g., 'anthropic-plan', 'openai-plan', 'anthropic', 'xai', 'deepseek',
'perplexity', 'mistral', 'ollama', 'azure-openai', etc.) that per
PROVIDER_TYPES_INFO have supportEmbedding: false; reconcile this by updating
embeddingModelSchema to only include providerType literals that truly support
embeddings (match PROVIDER_TYPES_INFO entries with supportEmbedding: true such
as 'openai', 'gemini', 'lm-studio', 'openrouter', 'openai-compatible'), or
alternatively update PROVIDER_TYPES_INFO to set supportEmbedding: true for any
providers intentionally supporting embeddings; modify the z.object literals in
embeddingModelSchema and/or PROVIDER_TYPES_INFO so both sources agree on which
providers support embeddings.
In `@src/utils/llm/sse.ts`:
- Around line 102-114: The parseSseEvent generator should not let malformed JSON
abort the whole stream; wrap the JSON.parse call inside a try-catch within
parseSseEvent (the function currently yielding JSON.parse(data) as T) so that
parse errors are caught, logged (include the raw data and the error message),
and the function simply continues/returns without rethrowing, allowing
subsequent SSE events to be processed normally.
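A sketch of the guarded parse the instruction describes; the `[DONE]` sentinel check is an assumption borrowed from common SSE conventions, and the function name is illustrative:

```typescript
// Yield parsed payloads from one raw SSE event, skipping (not rethrowing)
// malformed JSON so later events in the stream still get processed.
function* parseSseEventSafe<T>(rawEvent: string): Generator<T> {
  for (const line of rawEvent.split(/\r\n|\n|\r/)) {
    if (!line.startsWith('data:')) continue
    const data = line.slice('data:'.length).trim()
    if (!data || data === '[DONE]') continue
    try {
      yield JSON.parse(data) as T
    } catch (error) {
      // Log the raw payload and error, then keep going.
      console.error('Skipping malformed SSE event:', data, error)
    }
  }
}
```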
In `@styles.css`:
- Around line 1237-1241: The .smtcmp-plan-connect-fallback rule sets
flex-direction, align-items and gap but is missing display: flex, so those
properties have no effect; update the .smtcmp-plan-connect-fallback CSS rule to
include display: flex (or display: inline-flex if appropriate for layout) at the
top of the declaration so the flex container behavior is enabled.
🧹 Nitpick comments (20)
.gitignore (1)
23-26: LGTM! Clean gitignore updates.

The `.DS_Store` entry is correctly formatted to ignore macOS metadata files in all directories, and the new `CLAUDE.md` entry aligns with the Claude Code OAuth integration work in this PR.

💡 Optional: Consider whether CLAUDE.md should be root-specific

If `CLAUDE.md` is only expected at the repository root, you might want to use `/CLAUDE.md` instead to avoid accidentally ignoring Claude-related documentation in subdirectories. Alternatively, if multiple variations are expected, consider `CLAUDE*.md`.

The current pattern is fine if the intent is to ignore any file named exactly `CLAUDE.md` anywhere in the repository.

src/types/llm/request.ts (1)
30-32: Consider updating the comment to cover both reasoning fields.

The comment on line 30 states "Only available for OpenAI reasoning models" but only visually precedes `reasoning_effort`. Since `reasoning_summary` is also specific to OpenAI reasoning models, consider updating the comment to clearly apply to both fields.

📝 Suggested comment update

```diff
 // Only available for OpenAI reasoning models
 reasoning_effort?: ReasoningEffort
-reasoning_summary?: Reasoning['summary']
+reasoning_summary?: Reasoning['summary'] // Summary of model's reasoning process
```

Or group both under the comment more explicitly:

```diff
-// Only available for OpenAI reasoning models
-reasoning_effort?: ReasoningEffort
-reasoning_summary?: Reasoning['summary']
+// Only available for OpenAI reasoning models
+reasoning_effort?: ReasoningEffort
+reasoning_summary?: Reasoning['summary']
```

src/components/settings/modals/ConnectOpenAIPlanModal.tsx (1)
76-104: Preserve existing refresh tokens if the OAuth response omits them.

OpenAI's OAuth token endpoint treats `refresh_token` as optional per the OAuth 2.0 specification, even with the `offline_access` scope. The current code directly assigns `tokens.refresh_token` without validation, risking `undefined` overwrites that would break token refresh later. The check in `openaiCodexProvider.ts` (`if (!this.provider.oauth?.refreshToken)`) will fail silently if this happens.

♻️ Suggested defensive handling

```diff
-      if (
-        !plugin.settings.providers.find(
-          (p) => p.type === 'openai-plan' && p.id === OPENAI_PLAN_PROVIDER_ID,
-        )
-      ) {
-        throw new Error('OpenAI Plan provider not found.')
-      }
+      const existingProvider = plugin.settings.providers.find(
+        (p) => p.type === 'openai-plan' && p.id === OPENAI_PLAN_PROVIDER_ID,
+      )
+      if (!existingProvider) {
+        throw new Error('OpenAI Plan provider not found.')
+      }
+      const refreshToken =
+        tokens.refresh_token ?? existingProvider.oauth?.refreshToken
+      if (!refreshToken) {
+        throw new Error('OpenAI did not return a refresh token.')
+      }
 @@
-        refreshToken: tokens.refresh_token,
+        refreshToken,
```

src/core/llm/NoStainlessOpenAI.ts (1)
3-27: Make header stripping case-insensitive for plain objects.

HTTP headers are case-insensitive, so normalizing keys avoids missing `X-Stainless-*` variants in object maps.

♻️ Suggested tweak

```diff
-headers.forEach((_value, key) => {
-  if (key.startsWith('x-stainless')) {
+headers.forEach((_value, key) => {
+  if (key.toLowerCase().startsWith('x-stainless')) {
     keysToDelete.push(key)
   }
 })
 @@
-Object.keys(headerMap).forEach((key) => {
-  if (key.startsWith('x-stainless')) {
+Object.keys(headerMap).forEach((key) => {
+  if (key.toLowerCase().startsWith('x-stainless')) {
     // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
     delete headerMap[key]
   }
 })
```

src/settings/schema/migrations/14_to_15.test.ts (1)
3-26: Test coverage could be expanded.

The tests verify the essential migration behavior (version bump and plan provider addition), but consider adding coverage for:

- Chat model migration (verifying plan chat models like `claude-opus-4.5 (plan)` are added)
- Preservation of custom providers/models during migration
- Provider ordering after migration

💡 Example additional test case

```diff
+it('should add plan chat models', () => {
+  const oldSettings = {
+    version: 14,
+    providers: [{ type: 'anthropic', id: 'anthropic', apiKey: 'key' }],
+    chatModels: [
+      {
+        providerType: 'anthropic',
+        providerId: 'anthropic',
+        id: 'claude-sonnet-4.5',
+        model: 'claude-sonnet-4-5',
+      },
+    ],
+  }
+
+  const result = migrateFrom14To15(oldSettings)
+  const chatModels = result.chatModels as { id: string }[]
+  expect(chatModels.find((m) => m.id === 'claude-opus-4.5 (plan)')).toBeDefined()
+  expect(chatModels.find((m) => m.id === 'gpt-5.2 (plan)')).toBeDefined()
+})
```

src/components/settings/sections/PlanConnectionsSection.tsx (2)
17-20: Type assertion may mask missing `defaultProviderId`.

The `as string` assertion assumes `defaultProviderId` always exists in `PROVIDER_TYPES_INFO` for plan provider types. If this property is ever undefined, it would silently become the string `"undefined"`.

💡 Safer approach with runtime check

```diff
-const CLAUDE_PLAN_PROVIDER_ID = PROVIDER_TYPES_INFO['anthropic-plan']
-  .defaultProviderId as string
-const OPENAI_PLAN_PROVIDER_ID = PROVIDER_TYPES_INFO['openai-plan']
-  .defaultProviderId as string
+const CLAUDE_PLAN_PROVIDER_ID =
+  PROVIDER_TYPES_INFO['anthropic-plan'].defaultProviderId
+const OPENAI_PLAN_PROVIDER_ID =
+  PROVIDER_TYPES_INFO['openai-plan'].defaultProviderId
+
+if (!CLAUDE_PLAN_PROVIDER_ID || !OPENAI_PLAN_PROVIDER_ID) {
+  throw new Error('Plan provider IDs not configured in PROVIDER_TYPES_INFO')
+}
```
151-174: CSS class naming reuses MCP server badge styles.

`PlanConnectionStatusBadge` uses `smtcmp-mcp-server-status-badge` classes, which are semantically tied to MCP servers. This works, but it couples the component's styling to an unrelated feature. Consider creating dedicated CSS classes for plan connection badges if the styles diverge in the future, or document this intentional reuse.
src/utils/llm/sse.ts (1)
8-53: Consider consolidating duplicate stream handling logic.

The WebReadableStream branch (lines 14-36) and the Node.js stream branch (lines 38-52) have nearly identical buffer management and flush logic. This could be consolidated behind an async iterator abstraction.

💡 Potential consolidation approach

```typescript
async function* iterateStreamChunks(
  body: StreamSource,
  decoder: TextDecoder,
): AsyncIterable<string> {
  if (isWebReadableStream(body)) {
    const reader = body.getReader()
    while (true) {
      const { value, done } = await reader.read()
      if (value) yield decoder.decode(value, { stream: true })
      if (done) break
    }
  } else {
    for await (const chunk of body) {
      yield typeof chunk === 'string'
        ? chunk
        : decoder.decode(chunk as Uint8Array)
    }
  }
}

export async function* parseJsonSseStream<T>(
  body: StreamSource,
): AsyncIterable<T> {
  const decoder = new TextDecoder()
  let buffer = ''
  for await (const chunkText of iterateStreamChunks(body, decoder)) {
    buffer += chunkText
    yield* flushSseBuffer(buffer, (next) => {
      buffer = next
    })
  }
  yield* flushSseBuffer(buffer, (next) => {
    buffer = next
  }, true)
}
```

src/components/settings/sections/models/ChatModelSettings.tsx (1)
232-239: Type narrowing inconsistency when handling `anthropic-plan`.

The check on lines 232-234 accepts both the `anthropic` and `anthropic-plan` provider types, but line 239 casts the model to `ChatModel & { providerType: 'anthropic' }`. If the model is `anthropic-plan`, accessing `typedModel.thinking` may work at runtime (since both schemas have the same `thinking` structure per the relevant snippet), but the type assertion is technically incorrect.

Consider widening the type to handle both variants:

Suggested type fix

```diff
- const typedModel = model as ChatModel & { providerType: 'anthropic' }
+ const typedModel = model as ChatModel & { providerType: 'anthropic' | 'anthropic-plan' }
```

src/components/settings/modals/ConnectClaudePlanModal.tsx (2)
59-61: Empty catch block loses error context.

The catch block silently swallows the error. Consider logging it for debugging purposes while still showing the user notice.

Proposed improvement

```diff
    } catch {
+     console.error('Failed to initialize Claude OAuth flow:', e)
      new Notice('Failed to initialize OAuth flow')
    }
```

Note: Update to `catch (e)` to capture the error.
112-113: Consider logging the error for debugging.

The catch block shows a generic message but doesn't log the actual error, making it harder to diagnose OAuth failures (e.g., network issues, invalid codes, CORS problems).

Proposed improvement

```diff
-   } catch {
+   } catch (error) {
+     console.error('Claude OAuth failed:', error)
      new Notice('OAuth failed. Double-check the code and try again.')
    } finally {
```

src/core/llm/claudeCodeAuth.ts (1)
81-93: Clarify the fragment parsing behavior.

The `parseAuthorizationCode` function handles codes containing a `#` fragment, which is unusual for standard OAuth authorization codes. Consider adding a comment explaining why this parsing is needed for Claude's specific OAuth implementation.

src/utils/llm/httpTransport.ts (2)
39-41: Inconsistent error messages between fetch and node paths.

The fetch path (line 40) only includes the status code in the error, while the node path (line 50) includes both the status and the response body. For consistency and better debugging, consider including the response body in fetch errors too.

Proposed improvement for `postJson`

```diff
  if (!response.ok) {
-   throw new Error(`Request failed: ${response.status}`)
+   const errorBody = await response.text().catch(() => '')
+   throw new Error(`Request failed: ${response.status} ${errorBody}`)
  }
```
75-77: Same inconsistency in `postStream` error handling.

Similar to `postJson`, the fetch path error message doesn't include the response body for debugging.

Proposed improvement for `postStream`

```diff
  if (!response.ok || !response.body) {
-   throw new Error(`Request failed: ${response.status}`)
+   const errorBody = await response.text().catch(() => '')
+   throw new Error(`Request failed: ${response.status} ${errorBody}`)
  }
```

src/core/llm/openaiCodexProvider.ts (1)
127-140: Preserve per-request reasoning overrides.

`normalizeRequest` always overwrites `request.reasoning_*` with model values (or `undefined`), which can drop caller-provided overrides. Consider preferring request values when they exist.

♻️ Proposed tweak

```diff
  return {
    ...request,
-   reasoning_effort: reasoningEffort
-     ? (reasoningEffort as ReasoningEffort)
-     : undefined,
-   reasoning_summary: reasoningSummary
-     ? (reasoningSummary as Reasoning['summary'])
-     : undefined,
+   reasoning_effort:
+     request.reasoning_effort ??
+     (reasoningEffort ? (reasoningEffort as ReasoningEffort) : undefined),
+   reasoning_summary:
+     request.reasoning_summary ??
+     (reasoningSummary ? (reasoningSummary as Reasoning['summary']) : undefined),
  }
```

src/types/chat-model.types.ts (1)
40-48: Use `z.enum()` to constrain OpenAI reasoning fields to allowed values.

Both `reasoning_effort` and `reasoning_summary` currently accept any string, allowing invalid values to fail only at request time. Constrain them with enums: `reasoning_effort` accepts `"none"`, `"minimal"`, `"low"`, `"medium"`, `"high"`, and `"xhigh"` (with version-specific availability); `reasoning_summary` accepts `"auto"`, `"concise"`, and `"detailed"`. This enables early validation during schema parsing.

Also applies to: 59-67 (only `reasoning_effort` is present there).

src/core/llm/codexAuth.ts (4)
42-43: Module-level mutable state may cause issues with concurrent OAuth flows.

If a user initiates a second OAuth flow before the first completes, the shared `codexCallbackServer` and `isCodexCallbackStopping` variables could lead to unexpected behavior. While `stopCodexCallbackServer()` is called before starting a new server (line 113), consider whether this is the desired UX or if concurrent flows should be explicitly blocked with a user-facing error.
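A user-facing guard for that last option could be as simple as a module-level in-flight promise. This is a minimal sketch: `beginOAuthFlow` and its wiring are assumptions for illustration, not the PR's actual API.

```typescript
// Sketch: reject a second OAuth flow while one is still in progress.
// `beginOAuthFlow` is a hypothetical wrapper, not the PR's actual code.
let activeFlow: Promise<void> | null = null

function beginOAuthFlow(start: () => Promise<void>): Promise<void> {
  if (activeFlow) {
    // Surface a user-facing error instead of silently restarting the server.
    throw new Error('An OAuth flow is already in progress.')
  }
  activeFlow = start().finally(() => {
    // Clear the guard whether the flow succeeded or failed.
    activeFlow = null
  })
  return activeFlow
}
```

Because the guard is cleared in `finally`, a failed flow doesn't permanently block new attempts.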
79-99: Consider including the response body in error messages for easier debugging.

When token exchange fails, the error only includes the HTTP status code. The response body often contains useful OAuth error details (`error`, `error_description`) that would help users and developers diagnose issues.

♻️ Suggested improvement

```diff
 export async function exchangeCodexCodeForTokens(params: {
   code: string
   redirectUri?: string
   pkceVerifier: string
 }): Promise<CodexTokenResponse> {
   const response = await fetch(`${CODEX_ISSUER}/oauth/token`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
     body: new URLSearchParams({
       grant_type: 'authorization_code',
       code: params.code,
       redirect_uri: params.redirectUri ?? CODEX_REDIRECT_URI,
       client_id: CODEX_CLIENT_ID,
       code_verifier: params.pkceVerifier,
     }).toString(),
   })
   if (!response.ok) {
-    throw new Error(`Codex token exchange failed: ${response.status}`)
+    const errorBody = await response.text().catch(() => '')
+    throw new Error(`Codex token exchange failed: ${response.status}${errorBody ? ` - ${errorBody}` : ''}`)
   }
   return (await response.json()) as CodexTokenResponse
 }
```
217-233: Same error handling improvement applies to token refresh.

The refresh token flow has the same limitation as the token exchange: error details from the response body are discarded.

♻️ Suggested improvement

```diff
 export async function refreshCodexAccessToken(
   refreshToken: string,
 ): Promise<CodexTokenResponse> {
   const response = await fetch(`${CODEX_ISSUER}/oauth/token`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
     body: new URLSearchParams({
       grant_type: 'refresh_token',
       refresh_token: refreshToken,
       client_id: CODEX_CLIENT_ID,
     }).toString(),
   })
   if (!response.ok) {
-    throw new Error(`Codex token refresh failed: ${response.status}`)
+    const errorBody = await response.text().catch(() => '')
+    throw new Error(`Codex token refresh failed: ${response.status}${errorBody ? ` - ${errorBody}` : ''}`)
   }
   return (await response.json()) as CodexTokenResponse
 }
```
273-280: Minor modulo bias in random string generation.

With a 66-character alphabet and bytes drawn uniformly from 0-255, characters at indices 0-57 each have a ~1.56% chance while indices 58-65 have a ~1.17% chance (256 mod 66 = 58, so the first 58 indices are reachable by one extra byte value each). For PKCE verifiers this is generally acceptable, but for higher-entropy requirements, rejection sampling would eliminate the bias.
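For reference, bias-free sampling over the same 66-character alphabet could look like this. A sketch only: the alphabet constant and helper name are illustrative, not the actual `codexAuth.ts` code.

```typescript
import { randomBytes } from 'node:crypto'

// PKCE "unreserved" alphabet: 26 + 26 + 10 + 4 = 66 characters.
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~'

// Rejection sampling: discard bytes >= the largest multiple of 66 that
// fits in a byte (198), so every character index is equally likely.
function unbiasedRandomString(length: number): string {
  const limit = 256 - (256 % ALPHABET.length) // 198 for a 66-char alphabet
  let out = ''
  while (out.length < length) {
    for (const byte of randomBytes(length)) {
      if (byte < limit && out.length < length) {
        out += ALPHABET[byte % ALPHABET.length]
      }
    }
  }
  return out
}
```

Roughly 198/256 of bytes are accepted, so the loop rarely needs more than one extra round of `randomBytes`.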
- Refactored URL parameter extraction to support dynamic keys for better reusability.
- Added state validation to ensure OAuth state consistency during the connection process.
- Updated error messages for clearer user guidance in case of missing or mismatched OAuth states.
- Introduced a new utility function for posting form data to streamline token requests.
- Enhanced HTTP transport methods to support form URL encoding and improved error handling.
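The form-posting utility described above might look roughly like this. A hedged sketch: the name `postForm` and its signature are assumptions, not the PR's actual implementation.

```typescript
// Hypothetical helper for POSTing application/x-www-form-urlencoded bodies,
// as OAuth token endpoints expect. Not the PR's actual code.
async function postForm<T>(
  url: string,
  fields: Record<string, string>,
): Promise<T> {
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams(fields).toString(),
  })
  if (!response.ok) {
    // Include the response body so OAuth error details are not lost.
    const errorBody = await response.text().catch(() => '')
    throw new Error(`Request failed: ${response.status} ${errorBody}`)
  }
  return (await response.json()) as T
}
```

Centralizing the encoding and error handling this way keeps the individual token-exchange and refresh functions down to the parameter lists that differ between them.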
Actionable comments posted: 0
> [!CAUTION]
> Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (2)
32-32: Fix typo: “Everytime” → “Every time.”

Minor documentation polish.

✅ Suggested fix

```diff
-Everytime we ask ChatGPT, we need to put so much context information for each query.
+Every time we ask ChatGPT, we need to put so much context information for each query.
```
58-58: Use correct brand capitalization: “YouTube.”

Minor documentation polish.

✅ Suggested fix

```diff
-- **Youtube link support**: YouTube transcripts are fetched and included as context
+- **YouTube link support**: YouTube transcripts are fetched and included as context
```
🧹 Nitpick comments (1)
README.md (1)
130-132: Soften the “best performance” claim to avoid staleness.

This statement may age quickly and be disputed; consider wording like “often performs well” or “recommended based on current experience.”

💡 Possible wording

```diff
-Gemini API provides the best performance among free models for Smart Composer. Recommended for users looking for a free option.
+Gemini API often performs well among free options for Smart Composer. Recommended for users looking for a free option.
```
PR Description
Summary
Key Changes
- `setSettings` for provider updates and add plan providers.
- `openai` to `^6.8.1` and adjust request header stripping.

Checklist before requesting a review
- (`npm run lint:check` and `npm run type:check`)
- (`npm run test`)

Summary by CodeRabbit
New Features
Chores
Style
Docs