Add plan-based Claude/OpenAI connections with OAuth#509

Merged
kevin-on merged 17 commits into main from feat/connect-subscriptions on Jan 19, 2026
Conversation

@kevin-on (Collaborator) commented Jan 19, 2026

PR Description

Summary

  • Add OAuth-based plan connections for Anthropic (Claude Code) and OpenAI (Codex) with new settings UI.
  • Introduce new provider/model types, defaults, and a v14→v15 settings migration.
  • Upgrade OpenAI SDK and normalize tool calls for adapters; add SSE parsing utilities and HTTP transport support.

Key Changes

  • New settings section and modals for plan connections; hide API key inputs for plan providers.
  • Add plan providers/models to constants, types, and schemas; extend chat model settings for OpenAI plan reasoning summary.
  • Update provider/manager wiring to pass setSettings for provider updates and add plan providers.
  • Add OAuth/auth helpers and providers for Claude Code and OpenAI Codex.
  • Add SSE parsing helpers and HTTP transport for streaming.
  • Upgrade openai to ^6.8.1 and adjust request header stripping.

Checklist before requesting a review

  • I have reviewed the guidelines for contributing to this repository.
  • I have performed a self-review of my code
  • I have performed a code linting check and type check (by running npm run lint:check and npm run type:check)
  • I have run the test suite (by running npm run test)
  • I have tested the functionality manually

Summary by CodeRabbit

  • New Features

    • Connect OpenAI Plan and Claude Plan subscriptions via OAuth; new Plan Connections settings and connect modals.
    • New subscription-only models: Claude Opus 4.5, Claude Sonnet 4.5, GPT-5.2.
    • New providers and runtime adapters to support plan-based (OAuth) providers and Codex/Claude Code integrations.
    • Enhanced model settings with reasoning/thinking controls.
  • Chores

    • Settings schema bumped to v15 with migration and tests.
    • Updated OpenAI dependency.
  • Style

    • New styles for plan connection UI.
  • Docs

    • README updated to reflect subscription vs API-provider flows.


- Introduced OpenAI Codex as a new provider type with OAuth support.
- Implemented authentication flow including token exchange and callback server.
- Updated settings schema to include Codex provider and associated OAuth fields.
- Enhanced chat and provider management to accommodate Codex integration.
- Added migration logic to transition existing settings to support the new provider.
- Comprehensive tests added for migration and Codex functionality.
- Added comments explaining the use of Node's http/https modules for Codex endpoints due to CORS restrictions.
- Updated the nodePost function to throw an error when called on mobile platforms, ensuring proper platform handling.
- Refactored the import of IncomingMessage type for better clarity and consistency.
- Introduced the Anthropic Claude Code provider, including OAuth integration for authentication.
- Implemented the necessary authentication flow, including token exchange and authorization URL generation.
- Updated settings schema to include the new provider type and associated OAuth fields.
- Enhanced provider management and chat model handling to accommodate Claude Code integration.
- Added migration logic to transition existing settings to support the new provider.
- Comprehensive tests added for migration and Claude Code functionality.
- Upgraded OpenAI package from version 4.91.1 to 6.8.1 in package.json and package-lock.json.
- Refactored DeepSeekMessageAdapter and PerplexityMessageAdapter to normalize tool calls using the new method.
- Enhanced OpenAIMessageAdapter with a new normalizeToolCalls method to streamline tool call handling.
- Updated NoStainlessOpenAI to improve header management in requests.
- Introduced settings for the OpenAI Codex model, allowing users to configure reasoning effort and summary.
- Enhanced CodexMessageAdapter to handle reasoning summary extraction and inclusion in responses.
- Updated OpenAICodexProvider to normalize requests with reasoning parameters.
- Modified chat model schema to accommodate new reasoning fields.
- Added utility functions for reasoning summary extraction in response payloads.
…AI and Anthropic models

- Reintroduced OpenAI provider settings in the constants file for consistency.
- Updated DEFAULT_PROVIDERS and DEFAULT_CHAT_MODELS to include new models for OpenAI Codex and Anthropic Claude Code.
- Enhanced migration logic to ensure proper transition to version 15, incorporating new default providers and chat models.
…ropic models

- Replaced 'openai-codex' and 'anthropic-claude-code' with 'openai-plan' and 'anthropic-plan' in constants and provider settings.
- Updated DEFAULT_PROVIDERS and DEFAULT_CHAT_MODELS to reflect new provider types.
- Adjusted migration logic to ensure compatibility with the new provider types.
- Enhanced related components and schemas to support the updated provider structure.
…opic plans

- Added new PlanConnectionsSection to manage connections for OpenAI and Claude subscriptions.
- Introduced ConnectClaudePlanModal and ConnectOpenAIPlanModal for OAuth authentication flows.
- Enhanced styles for plan connection cards and status indicators.
- Updated SettingsTabRoot to include the new PlanConnectionsSection.
- Refactored ProvidersSection to exclude subscription providers from API key management.
@coderabbitai bot commented Jan 19, 2026

Warning

Rate limit exceeded

@kevin-on has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 26 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between b6e968b and 6d54f38.

📒 Files selected for processing (1)
  • src/settings/schema/migrations/14_to_15.test.ts
📝 Walkthrough

Walkthrough

Adds plan-based LLM provider support (Claude Plan, OpenAI Codex): new OAuth PKCE flows, provider implementations and adapters, HTTP/SSE utilities, settings UI and modals, schema migration to v15, constants and types updates, plus wiring to propagate provider token refreshes into settings.

Changes

  • Config & deps (`.gitignore`, `package.json`): .gitignore updated (add CLAUDE.md, ensure .DS_Store ignored); openai dependency bumped ^4.91.1 → ^6.8.1
  • Constants & defaults (`src/constants.ts`): new CODEX/CLAUDE_CODE constants, PLAN_PROVIDER_TYPES, new plan provider entries in PROVIDER_TYPES_INFO, updated DEFAULT_PROVIDERS and DEFAULT_CHAT_MODELS
  • Provider manager & wiring (`src/core/llm/manager.ts`): added setSettings plumbing and onProviderUpdate callback; register plan providers (anthropic-plan, openai-plan) and forward setSettings to provider clients
  • New providers (`src/core/llm/anthropicClaudeCodeProvider.ts`, `src/core/llm/openaiCodexProvider.ts`): new OAuth-backed provider classes for anthropic-plan and openai-plan with token refresh, getAuthHeaders, and an on-update callback
  • Auth utilities (`src/core/llm/claudeCodeAuth.ts`, `src/core/llm/codexAuth.ts`): PKCE/state generation, authorize URL builders, token exchange and refresh flows; Codex includes desktop callback server helpers
  • Message adapters (`src/core/llm/claudeCodeMessageAdapter.ts`, `src/core/llm/codexMessageAdapter.ts`): new adapters for Claude Code and Codex covering request normalization, thinking budgets, SSE streaming parsing, and non-stream/stream flows
  • HTTP / SSE utils (`src/utils/llm/httpTransport.ts`, `src/utils/llm/sse.ts`): postJson/postFormUrlEncoded/postStream (with Node fallback) and the parseJsonSseStream SSE parser
  • Chat wiring (`src/components/chat-view/Chat.tsx`, `src/components/chat-view/useChatStreamManager.ts`, `src/core/llm/NoStainlessOpenAI.ts`): useSettings now exposes setSettings; getChatModelClient/getProviderClient accept setSettings; NoStainlessOpenAI refactored to strip stainless headers
  • Settings UI & modals (`src/components/settings/SettingsTabRoot.tsx`, `src/components/settings/sections/PlanConnectionsSection.tsx`, `src/components/settings/modals/ConnectClaudePlanModal.tsx`, `src/components/settings/modals/ConnectOpenAIPlanModal.tsx`): new PlanConnectionsSection and two OAuth modals to connect/disconnect plan providers and persist tokens; modals implement PKCE flows and token exchange
  • Provider form & sections (`src/components/settings/modals/ProviderFormModal.tsx`, `src/components/settings/sections/ProvidersSection.tsx`): filter plan providers out of API provider lists; hide API Key/Base URL inputs for plan types; reset additionalSettings on type change
  • Model settings & types (`src/components/settings/sections/models/ChatModelSettings.tsx`, `src/types/*.ts`): add Codex model settings (reasoning fields); restructure discriminated unions to include anthropic-plan and openai-plan variants; add reasoning_summary to requests and update embedding/model/provider types
  • Migrations & tests (`src/settings/schema/migrations/14_to_15.ts`, `src/settings/schema/migrations/14_to_15.test.ts`, `src/settings/schema/migrations/index.ts`): new migration to v15 (add plan providers/models), bumped schema version, and a unit test for the migration
  • Adapters & small updates (`src/core/llm/anthropic.ts`, `src/core/llm/deepseekMessageAdapter.ts`, `src/core/llm/openaiMessageAdapter.ts`, `src/core/llm/perplexityMessageAdapter.ts`): Anthropic streaming moved to a static generator; added protected normalizeToolCalls in the OpenAI adapter; normalize tool_calls in other adapters; minor header and visibility changes
  • Styling & minor UI tweaks (`styles.css`, `src/components/settings/sections/McpSection.tsx`): new CSS for plan connection UI; minor icon size tweak in the MCP status badge

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Modal as ConnectClaudePlanModal
    participant OAuth as Claude OAuth
    participant TokenEP as Token Endpoint
    participant Settings as Settings Context

    User->>Modal: Click "Connect"
    Modal->>Modal: Generate PKCE & state
    Modal->>OAuth: Open authorize URL (browser)
    OAuth-->>User: Consent screen
    User->>Modal: Paste redirect URL / code
    Modal->>TokenEP: POST code + verifier
    TokenEP-->>Modal: Return accessToken, refreshToken, expiresAt
    Modal->>Settings: Persist tokens to provider config
    Settings-->>Modal: Confirm update
    Modal->>User: Show connected status
sequenceDiagram
    actor User
    participant Modal as ConnectOpenAIPlanModal
    participant CallbackSrv as Local Callback Server
    participant OAuth as OpenAI OAuth
    participant TokenEP as Token Endpoint
    participant Settings as Settings Context

    User->>Modal: Click "Login"
    Modal->>Modal: Generate PKCE & state
    Modal->>CallbackSrv: Start local callback server
    Modal->>OAuth: Open authorize URL
    OAuth-->>User: Consent screen
    User->>OAuth: Approve & redirect
    OAuth->>CallbackSrv: Redirect with code & state
    CallbackSrv->>Modal: Return authorization code
    Modal->>TokenEP: POST code + verifier
    TokenEP-->>Modal: Return tokens
    Modal->>Settings: Persist provider tokens
    Modal->>CallbackSrv: Stop server
    Modal->>User: Close modal, show connected status
sequenceDiagram
    actor User
    participant Chat as Chat Component
    participant Manager as LLM Manager
    participant Provider as Plan Provider
    participant OAuth as Token Endpoint
    participant Adapter as Message Adapter
    participant Settings as Settings Context

    User->>Chat: Send message (plan model)
    Chat->>Manager: getChatModelClient(model, settings, setSettings)
    Manager->>Provider: Instantiate with onProviderUpdate
    Provider->>Provider: getAuthHeaders()
    alt tokens expired
        Provider->>OAuth: POST refresh_token
        OAuth-->>Provider: New tokens
        Provider->>Manager: onProviderUpdate(providerId, update)
        Manager->>Settings: setSettings(updatedSettings)
    end
    Provider->>Adapter: generateResponse / streamResponse
    Adapter-->>Provider: LLM response
    Provider-->>Chat: Return response
    Chat-->>User: Render response
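As a concrete illustration of the "Generate PKCE & state" step that opens each flow above, here is a minimal RFC 7636 S256 sketch. Function names are illustrative and may not match the actual helpers in claudeCodeAuth.ts / codexAuth.ts:

```typescript
import { createHash, randomBytes } from 'node:crypto'

// base64url without padding, as required for PKCE values
function base64UrlEncode(buffer: Buffer): string {
  return buffer
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '')
}

function generatePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43-character base64url verifier (within RFC 7636 limits)
  const verifier = base64UrlEncode(randomBytes(32))
  // code_challenge = BASE64URL(SHA256(code_verifier))
  const challenge = base64UrlEncode(
    createHash('sha256').update(verifier).digest(),
  )
  return { verifier, challenge }
}
```

The verifier is kept locally and sent with the token exchange; only the challenge goes into the authorize URL.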

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰 I hopped in with PKCE and a plan,
Tokens snug in a fluffy hand,
Streams that whisper, adapters that play,
Settings refreshed to save the day,
A carrot for OAuth — connected, hooray! 🎉

🚥 Pre-merge checks (✅ 2 passed | ❌ 1 failed)

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 3.92%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2)
  • Title check ✅ Passed: the title 'Add plan-based Claude/OpenAI connections with OAuth' clearly and concisely describes the main objective of the PR: introducing OAuth-based plan connections for both Claude and OpenAI providers.
  • Description check ✅ Passed: the PR description provides a comprehensive summary covering key changes, new features, and dependencies; all required checklist items are marked as completed.




@coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/components/chat-view/useChatStreamManager.ts (1)

53-87: The setSettings parameter is correctly propagated to both code paths; however, there is a confirmed performance issue upstream.

The code in useChatStreamManager.ts is correct, but setSettings is not a stable reference. In src/ChatView.tsx (lines 78-80), setSettings is created as an inline arrow function without useCallback, causing it to be recreated on every render. This forces SettingsProvider's useMemo and all consumers (including this useMemo) to recalculate unnecessarily.

Wrap setSettings in useCallback in ChatView.tsx to stabilize the reference and prevent cascade re-renders:

Example fix
const handleSetSettings = useCallback((newSettings) => {
  plugin.setSettings(newSettings)
}, [plugin])

Then pass handleSetSettings instead of an inline function.

🤖 Fix all issues with AI agents
In `@src/components/settings/modals/ConnectClaudePlanModal.tsx`:
- Around line 65-113: The connect function currently uses the raw code and never
validates OAuth state; update connect to accept a full redirect URL by parsing
query parameters (extract both "code" and "state" from the pasted input),
validate that the extracted state matches the expected state variable before
calling exchangeClaudeCodeForTokens, and if the state is missing or mismatched
show a Notice and abort; then pass the parsed code (not the raw input) plus
pkceVerifier/state into exchangeClaudeCodeForTokens and proceed to update the
provider oauth tokens as before (refer to connect, code, state, pkceVerifier,
and exchangeClaudeCodeForTokens to locate the changes).
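A minimal sketch of the parse-and-validate step described above, assuming the user pastes the full redirect URL. The helper name and return shape are illustrative, not the modal's actual API:

```typescript
// Parse a pasted redirect URL, extract code + state, and validate the state
// against the value generated before opening the authorize URL.
function parseRedirectInput(
  input: string,
  expectedState: string,
): { code: string } | { error: string } {
  let code: string | null = null
  let state: string | null = null
  try {
    const url = new URL(input.trim())
    code = url.searchParams.get('code')
    state = url.searchParams.get('state')
  } catch {
    return { error: 'Invalid redirect URL' }
  }
  if (!code) return { error: 'Missing authorization code' }
  if (!state || state !== expectedState) {
    // Abort before any token exchange: mismatched state suggests CSRF
    return { error: 'OAuth state mismatch' }
  }
  return { code }
}
```

Only the parsed `code` (never the raw input) would then be passed to exchangeClaudeCodeForTokens.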

In `@src/components/settings/modals/ConnectOpenAIPlanModal.tsx`:
- Around line 45-195: The manual flow currently only extracts the authorization
code via extractCodeFromRedirectUrl but ignores the OAuth state; update
connectWithRedirectUrl to also extract and validate the state parameter from the
redirectUrl (either extend extractCodeFromRedirectUrl to return state or add an
extractStateFromRedirectUrl helper), call ensureAuthContext() to get the
expected state (or use the stored state variable), compare the returned state to
the expected state before calling exchangeCodexCodeForTokens, and if they do not
match setManualError to a clear message and abort the flow (do not call
exchangeCodexCodeForTokens or applyTokens). Reference
extractCodeFromRedirectUrl, ensureAuthContext, connectWithRedirectUrl,
setManualError, and exchangeCodexCodeForTokens when making the change.

In `@src/components/settings/sections/PlanConnectionsSection.tsx`:
- Around line 53-63: The onConfirm callback captures the outer settings value
and can produce a stale update; change the setSettings call in onConfirm to use
a functional update so it receives the latest state (use the updater form that
accepts previous settings and returns { ...previous, providers:
previous.providers.map(...) } applying the providerId/providerType match and
clearing oauth). Update the references inside to use the updater's parameter
instead of the closed-over settings and keep the same provider matching logic in
the mapping; ensure setSettings supports functional updates before applying.
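The functional-update shape the comment asks for looks roughly like this (types simplified; assumes setSettings accepts a React-style updater, which should be verified first):

```typescript
type Provider = { id: string; type: string; oauth?: unknown }
type Settings = { providers: Provider[] }

// Clear a provider's oauth block using the updater form, so the write is
// always based on the latest settings rather than a closed-over snapshot.
function clearProviderOauth(
  setSettings: (updater: (prev: Settings) => Settings) => void,
  providerId: string,
): void {
  setSettings((prev) => ({
    ...prev,
    providers: prev.providers.map((p) =>
      p.id === providerId ? { ...p, oauth: undefined } : p,
    ),
  }))
}
```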

In `@src/core/llm/anthropicClaudeCodeProvider.ts`:
- Around line 86-123: The refresh logic in getAuthHeaders overwrites the stored
refresh token with undefined when the token endpoint omits refresh_token; update
the refresh handling in getAuthHeaders (inside the try block where
refreshClaudeCodeAccessToken is called) to preserve the existing refresh token
when tokens.refresh_token is undefined by setting updatedOauth.refreshToken to
tokens.refresh_token ?? this.provider.oauth.refreshToken (or equivalent), so the
stored refresh token is retained for future refreshes and only replaced when a
new refresh_token is present; keep the rest of the updatedOauth fields as-is and
ensure onProviderUpdate persists the preserved token.

In `@src/core/llm/claudeCodeAuth.ts`:
- Around line 95-108: postTokenRequest currently uses postJson to send a JSON
body to CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT, but the OAuth token endpoint must use
application/x-www-form-urlencoded (UTF-8) per RFC 6749 §4.1.3; replace the JSON
POST with a form-encoded POST: convert the filtered payload to a URL-encoded
string (key=value&...), set the Content-Type header to
"application/x-www-form-urlencoded; charset=UTF-8", and send that to
CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT (do this inside postTokenRequest or call a
helper that performs form-encoded POST instead of postJson) so Claude Code token
requests are encoded correctly.
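A sketch of the form-encoded variant this comment asks for. The fetch call and error shape are illustrative; the real code may route through the shared httpTransport helpers instead:

```typescript
// Serialize the token payload as application/x-www-form-urlencoded
// per RFC 6749 §4.1.3, dropping undefined fields.
function encodeTokenPayload(
  payload: Record<string, string | undefined>,
): string {
  const body = new URLSearchParams()
  for (const [key, value] of Object.entries(payload)) {
    if (value !== undefined) body.append(key, value)
  }
  return body.toString()
}

async function postTokenRequestFormEncoded<T>(
  endpoint: string,
  payload: Record<string, string | undefined>,
): Promise<T> {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    },
    body: encodeTokenPayload(payload),
  })
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`)
  }
  return (await response.json()) as T
}
```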

In `@src/core/llm/codexAuth.ts`:
- Around line 40-44: codexCallbackServer and isCodexCallbackStopping are
module-level singletons that break under concurrent OAuth flows; replace them
with per-flow state (e.g., create a CodexCallbackSession object or factory) that
holds a Server instance and a stopping flag for each flow, or maintain a Map
keyed by a unique flowId to store {server, isStopping, timeoutHandle}; update
start/stop logic to accept/return the session/flowId, use CALLBACK_TIMEOUT_MS
from the session control, and ensure proper cleanup/clearTimeout on completion
so concurrent connect attempts do not clobber each other's state.

In `@src/core/llm/manager.ts`:
- Around line 41-53: The onProviderUpdate closure uses a stale settings
snapshot; instead ensure you base the provider merge on the most recent state:
either (A) refactor setSettings to accept a functional updater and change
onProviderUpdate to call setSettings(prev => ({ ...prev, providers:
prev.providers.map(...) })) using the targetProviderId and update, or (B) if you
cannot change setSettings now, fetch the freshest settings before writing (e.g.,
call the current settings getter or reload settings into a local latestSettings
variable) and then compute updatedProviders from latestSettings.providers before
calling setSettings({ ...latestSettings, providers: updatedProviders }); update
references: onProviderUpdate, setSettings, settings, LLMProvider.

In `@src/core/llm/openaiCodexProvider.ts`:
- Around line 82-124: In getAuthHeaders(), when handling the
refreshCodexAccessToken response, preserve the existing refresh token if
tokens.refresh_token is missing by falling back to
this.provider.oauth.refreshToken (instead of assigning undefined) when building
updatedOauth; update the construction of updatedOauth (and any accountId
fallback via extractCodexAccountId) so refreshToken uses tokens.refresh_token ??
this.provider.oauth.refreshToken to avoid forcing logins when the refresh
response omits a refresh_token.
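The fallback described in this comment (and the matching one for anthropicClaudeCodeProvider.ts) amounts to a single `??`. A minimal sketch with illustrative field names:

```typescript
type OauthState = { accessToken: string; refreshToken: string; expiresAt: number }
type TokenResponse = { access_token: string; refresh_token?: string; expires_in: number }

// Merge a refresh response into stored oauth state, preserving the existing
// refresh token unless the endpoint issued a new one.
function mergeRefreshedTokens(
  current: OauthState,
  tokens: TokenResponse,
  now: number,
): OauthState {
  return {
    accessToken: tokens.access_token,
    refreshToken: tokens.refresh_token ?? current.refreshToken,
    expiresAt: now + tokens.expires_in * 1000,
  }
}
```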

In `@src/utils/llm/httpTransport.ts`:
- Around line 134-146: When signal.aborted is true we call request.destroy(...)
but the surrounding promise may never settle; update the abort-precheck in the
http transport so it explicitly triggers the same rejection path as the request
'error' handler: either invoke the existing error handler or directly reject the
promise with a new AbortError when you detect signal.aborted (use the same Error
message used by the abortHandler), and keep the existing cleanup
(removeEventListener) logic. Refer to the variables and handlers in this file:
signal.aborted check, request, the abortHandler, and the existing 'error' event
handler to ensure you call the same rejection/cleanup flow synchronously when
pre-aborted.
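The pre-abort guard can be factored out as below. Parameter shapes are illustrative; the point is that the promise is rejected synchronously rather than relying on an 'error' event that may never fire for a destroyed request:

```typescript
// Returns true if the signal was already aborted, in which case the request
// is destroyed and the promise is settled through the same rejection path
// the abort handler would use.
function guardPreAborted(
  signal: AbortSignal | undefined,
  destroyRequest: (error: Error) => void,
  reject: (error: Error) => void,
): boolean {
  if (signal?.aborted) {
    const error = new Error('Request aborted')
    error.name = 'AbortError'
    destroyRequest(error)
    reject(error) // settle synchronously; do not wait for an 'error' event
    return true
  }
  return false
}
```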

In `@src/utils/llm/sse.ts`:
- Around line 102-114: parseSseEvent currently splits rawEvent on '\n' which
misses events using lone '\r' line endings; normalize line endings before
splitting (or split using a regex like /\r\n|\n|\r/) so the subsequent .map and
.filter operate correctly; update the parseSseEvent implementation (referencing
function name parseSseEvent and related logic that trims trailing '\r' and
filters 'data:' lines) to normalize line endings first, then continue extracting
and parsing the 'data:' lines as before.
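A sketch of the normalized splitting (helper name illustrative; the real parseSseEvent handles more fields than `data:`):

```typescript
// Extract the data payload from a raw SSE event, splitting on all three
// line terminators (\r\n, \n, \r) allowed by the SSE spec.
function extractSseData(rawEvent: string): string | null {
  const dataLines = rawEvent
    .split(/\r\n|\n|\r/)
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).replace(/^ /, ''))
  return dataLines.length > 0 ? dataLines.join('\n') : null
}
```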
🧹 Nitpick comments (14)
src/components/settings/sections/McpSection.tsx (1)

238-260: Inconsistent icon size for Disconnected status.

The CircleMinus icon uses size 14 while all other status icons (Check, Loader2, X) use size 16. This creates visual inconsistency. If intentional for optical balance, consider adding a comment explaining the rationale.

♻️ Suggested fix for consistency
     [McpServerStatus.Disconnected]: {
-      icon: <CircleMinus size={14} />,
+      icon: <CircleMinus size={16} />,
       label: 'Disconnected',
       statusClass: 'smtcmp-mcp-server-status-badge--disconnected',
     },
src/types/embedding-model.types.ts (2)

22-22: Consider tracking the TODO for provider list de-duplication.

The TODO comment highlights a valid architectural concern. Consider opening an issue to track this technical debt and prevent it from growing as more providers are added.

Would you like me to help draft an issue to track this refactoring task?


22-31: Remove unused anthropic-plan and openai-plan from embedding model schema, or implement actual embedding support.

The embedding schema includes anthropic-plan and openai-plan providers (lines 25–31), but they are never instantiated in DEFAULT_EMBEDDING_MODELS—only openai, gemini, and ollama are used for embeddings. Plan providers appear exclusively in chat models. Remove these unused schema entries or implement proper embedding support if these providers will eventually support embeddings.

src/utils/llm/httpTransport.ts (2)

92-150: Consider adding a request timeout.

Network requests without timeouts can hang indefinitely if the server doesn't respond. Consider adding an optional timeout parameter or a reasonable default timeout to prevent resource leaks.

♻️ Timeout implementation sketch
async function nodePost(
  endpoint: string,
  body: string,
  headers?: Record<string, string>,
  signal?: AbortSignal,
  timeoutMs: number = 30000, // 30s default
): Promise<IncomingMessage> {
  // ... existing code ...
  
  return new Promise((resolve, reject) => {
    const request = client.request(/* ... */)
    
    const timeoutId = setTimeout(() => {
      request.destroy(new Error('Request timed out'))
      reject(new Error('Request timed out'))
    }, timeoutMs)
    
    request.on('error', (error) => {
      clearTimeout(timeoutId)
      reject(error)
    })
    
    // Clear timeout on successful response
    // ... rest of implementation
  })
}

39-41: Error messages lack response body context in fetchFn paths.

When using the fetchFn path, error messages only include the status code but not the response body, making debugging harder. The nodePost path includes the body in errors (line 50, 86).

♻️ Include response body in error messages
     if (!response.ok) {
-      throw new Error(`Request failed: ${response.status}`)
+      const errorBody = await response.text().catch(() => '')
+      throw new Error(`Request failed: ${response.status} ${errorBody}`)
     }

Also applies to: 75-77

src/utils/llm/sse.ts (1)

61-81: Consider optimizing boundary detection for performance.

The current implementation searches for all three boundary types on every call, which is fine for typical SSE payloads. However, for high-throughput streams, you could short-circuit after finding the first boundary if it's at index 0, or track the expected boundary type after the first detection.

src/core/llm/codexAuth.ts (2)

101-207: Callback server implementation is solid but consider adding connection timeout.

The server handles the happy path and error cases well. However, individual HTTP connections don't have a timeout—a slow client could hold the connection indefinitely. Node's server.setTimeout() could be useful here.

Additionally, consider destroying active connections in finalize to ensure prompt cleanup:

🔧 Suggested enhancement
     server.on('error', (error) => {
       finalize(
         error instanceof Error
           ? error
           : new Error('OAuth callback server error'),
       )
     })

+    server.setTimeout(30000) // 30s connection timeout
+
     server.listen(port, hostname, () => {
       codexCallbackServer = server
     })

273-280: Minor bias in random string generation.

Using b % chars.length introduces a slight bias since 256 is not evenly divisible by 66 (the character set length). For cryptographic purposes this is negligible, but you could use rejection sampling for a more uniform distribution.
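A rejection-sampling variant would look something like this (a sketch, not the file's actual implementation; the 66-character set matches the PKCE unreserved alphabet):

```typescript
import { randomBytes } from 'node:crypto'

// Discard bytes at or above the largest multiple of chars.length below 256,
// so every character in the set is equally likely.
function generateUnbiasedRandomString(length: number, chars: string): string {
  const limit = 256 - (256 % chars.length) // reject bytes >= limit
  let result = ''
  while (result.length < length) {
    for (const byte of randomBytes(length)) {
      if (byte < limit) {
        result += chars[byte % chars.length]
        if (result.length === length) break
      }
    }
  }
  return result
}
```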

src/core/llm/claudeCodeAuth.ts (2)

81-93: Clarify the code#state parsing logic.

The parseAuthorizationCode function splits on # to extract state from the code. This appears to handle a specific edge case where state is appended to the authorization code. Consider adding a comment explaining when this format occurs to aid future maintainers.

+/**
+ * Parses authorization code that may contain state appended after a '#' fragment.
+ * This handles the case where the OAuth callback includes state in the code parameter.
+ */
 function parseAuthorizationCode(
   code: string,
   state?: string,
 ): { code: string; state?: string } {

110-126: Code duplication: generateRandomString and base64UrlEncode are identical to codexAuth.ts.

These utility functions are duplicated between codexAuth.ts and claudeCodeAuth.ts. Consider extracting them to a shared module (e.g., src/utils/crypto.ts or src/utils/oauth.ts).

src/settings/schema/migrations/14_to_15.test.ts (1)

3-27: Tests cover basic cases but consider adding edge case coverage.

The tests verify version increment and provider addition, which is good. Consider adding tests for:

  1. Migration when anthropic-plan or openai-plan providers already exist (idempotency)
  2. Migration with empty providers array
  3. Preservation of existing provider data (API keys, other settings)
💡 Additional test cases to consider
it('should preserve existing providers and their data', () => {
  const oldSettings = {
    version: 14,
    providers: [
      { type: 'anthropic', id: 'anthropic', apiKey: 'test-key' },
    ],
    chatModels: [],
  }

  const result = migrateFrom14To15(oldSettings)
  const providers = result.providers as { type: string; id: string; apiKey?: string }[]
  const anthropic = providers.find((p) => p.type === 'anthropic')
  expect(anthropic?.apiKey).toBe('test-key')
})

it('should not duplicate plan providers if already present', () => {
  const oldSettings = {
    version: 14,
    providers: [
      { type: 'anthropic-plan', id: 'anthropic-plan' },
    ],
    chatModels: [],
  }

  const result = migrateFrom14To15(oldSettings)
  const providers = result.providers as { type: string }[]
  const planProviders = providers.filter((p) => p.type === 'anthropic-plan')
  expect(planProviders.length).toBe(1)
})
src/components/settings/sections/PlanConnectionsSection.tsx (1)

151-173: Consider using dedicated CSS class names for plan connection badges.

The badge component reuses MCP server status badge CSS classes (smtcmp-mcp-server-status-badge, smtcmp-mcp-server-status-badge--connected). While this works for styling consistency, it may cause confusion during maintenance. Consider either extracting shared badge styles to a generic class or creating dedicated plan connection badge classes.

src/types/provider.types.ts (1)

17-41: Consider extracting the shared OAuth schema to reduce duplication.

The oauth object schema is identical between anthropic-plan and openai-plan. Extracting it to a reusable schema would improve maintainability.

Suggested refactor
+const oauthSchema = z.object({
+  accessToken: z.string(),
+  refreshToken: z.string(),
+  expiresAt: z.number(),
+  accountId: z.string().optional(),
+})
+
 export const llmProviderSchema = z.discriminatedUnion('type', [
   z.object({
     type: z.literal('anthropic-plan'),
     ...baseLlmProviderSchema.shape,
-    oauth: z
-      .object({
-        accessToken: z.string(),
-        refreshToken: z.string(),
-        expiresAt: z.number(),
-        accountId: z.string().optional(),
-      })
-      .optional(),
+    oauth: oauthSchema.optional(),
   }),
   z.object({
     type: z.literal('openai-plan'),
     ...baseLlmProviderSchema.shape,
-    oauth: z
-      .object({
-        accessToken: z.string(),
-        refreshToken: z.string(),
-        expiresAt: z.number(),
-        accountId: z.string().optional(),
-      })
-      .optional(),
+    oauth: oauthSchema.optional(),
   }),
src/components/settings/sections/models/ChatModelSettings.tsx (1)

232-234: Widen typedModel for anthropic-plan to avoid a misleading cast.

Since the check now includes 'anthropic-plan', reflect that in the type for clarity and safety.

♻️ Suggested tweak
-      const typedModel = model as ChatModel & { providerType: 'anthropic' }
+      const typedModel = model as ChatModel & {
+        providerType: 'anthropic' | 'anthropic-plan'
+      }

@coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/components/settings/sections/ProvidersSection.tsx (1)

124-131: Minor text inconsistency: "Set API key" vs "Credential" header.

The column header was changed to "Credential" (line 115), but the placeholder text on line 130 still says "Set API key". Consider updating for consistency.

📝 Suggested fix
                 <td
                   className="smtcmp-settings-table-api-key"
                   onClick={() => {
                     new EditProviderModal(app, plugin, provider).open()
                   }}
                 >
-                  {provider.apiKey ? '••••••••' : 'Set API key'}
+                  {provider.apiKey ? '••••••••' : 'Set credential'}
                 </td>
🤖 Fix all issues with AI agents
In `@package.json`:
- Line 72: The package bump to "openai": "^6.8.1" breaks all v4-style usage
(e.g., client.chat.completions.create(), ChatCompletion/ChatCompletionChunk
types, streaming deltas and tool/usage fields); either revert the dependency to
a v4.x release in package.json or migrate all OpenAI calls to the new Responses
API: replace client.chat.completions.create() usages with
client.responses.create(), update handling of response objects (replace
ChatCompletion/ChatCompletionChunk types), refactor streaming logic to the
event-driven Responses API format, and adjust access to function/tool outputs
and usage fields per the Responses API migration guide.

In `@src/constants.ts`:
- Around line 10-16: Replace the undocumented ChatGPT backend URL in the
CODEX_RESPONSES_ENDPOINT constant with the official OpenAI Responses API; update
the value of CODEX_RESPONSES_ENDPOINT to use
"https://api.openai.com/v1/responses" so the code uses the public, documented
endpoint referenced by the CODEX_RESPONSES_ENDPOINT symbol.

In `@src/core/llm/claudeCodeAuth.ts`:
- Around line 104-107: The token request currently uses postJson which sends
application/json; replace the
postJson<ClaudeCodeTokenResponse>(CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT, payload)
call in claudeCodeAuth.ts with a form-urlencoded POST: serialize the payload
with URLSearchParams (or equivalent), send it to
CLAUDE_CODE_OAUTH_TOKEN_ENDPOINT with header Content-Type:
application/x-www-form-urlencoded and parse the response as
ClaudeCodeTokenResponse; if a helper like postForm/postUrlEncoded exists use
that, otherwise construct the fetch/axios request directly ensuring the body is
urlencoded and the correct header is set.
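A minimal sketch of what the urlencoded token request could look like, assuming Node 18+ global `fetch`; the `postForm` helper name and payload shape are illustrative, not the plugin's actual API:

```typescript
// Sketch of a form-urlencoded OAuth token POST. URLSearchParams handles
// percent-encoding of each key/value pair per application/x-www-form-urlencoded.
type TokenPayload = Record<string, string>

function encodeFormBody(payload: TokenPayload): string {
  return new URLSearchParams(payload).toString()
}

async function postForm<T>(url: string, payload: TokenPayload): Promise<T> {
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: encodeFormBody(payload),
  })
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`)
  }
  return (await response.json()) as T
}
```

Note that `URLSearchParams` encodes spaces as `+` and reserved characters as percent escapes, which is what OAuth token endpoints expect from a form body.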

In `@src/core/llm/manager.ts`:
- Around line 41-53: onProviderUpdate closes over the current settings snapshot
and can overwrite concurrent updates; change the setSettings call in
onProviderUpdate to a functional update that reads the latest settings (e.g.
setSettings(prev => ({ ...prev, providers: prev.providers.map(...) }))) and
merge the provider change against prev.providers rather than the closed-over
settings, referencing onProviderUpdate, setSettings, settings, providers and
LLMProvider to locate and update the merge logic.
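The functional-update pattern can be sketched as a pure merge helper; the `Settings`/`LLMProvider` shapes below are simplified stand-ins for the plugin's real types:

```typescript
// Sketch: merge a provider change against the *latest* settings snapshot
// instead of a closed-over one, so concurrent updates are not lost.
interface LLMProvider {
  id: string
  type: string
  oauth?: { accessToken: string }
}

interface Settings {
  providers: LLMProvider[]
}

// Pure helper: replaces the matching provider, keeps all others by reference
function mergeProvider(prev: Settings, updated: LLMProvider): Settings {
  return {
    ...prev,
    providers: prev.providers.map((p) => (p.id === updated.id ? updated : p)),
  }
}

// Usage inside onProviderUpdate (the updater form reads the latest state):
// setSettings((prev) => mergeProvider(prev, updatedProvider))
```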

In `@src/types/embedding-model.types.ts`:
- Around line 24-31: The embeddingModelSchema currently allows providerType
literals (e.g., 'anthropic-plan', 'openai-plan', 'anthropic', 'xai', 'deepseek',
'perplexity', 'mistral', 'ollama', 'azure-openai', etc.) that per
PROVIDER_TYPES_INFO have supportEmbedding: false; reconcile this by updating
embeddingModelSchema to only include providerType literals that truly support
embeddings (match PROVIDER_TYPES_INFO entries with supportEmbedding: true such
as 'openai', 'gemini', 'lm-studio', 'openrouter', 'openai-compatible'), or
alternatively update PROVIDER_TYPES_INFO to set supportEmbedding: true for any
providers intentionally supporting embeddings; modify the z.object literals in
embeddingModelSchema and/or PROVIDER_TYPES_INFO so both sources agree on which
providers support embeddings.

In `@src/utils/llm/sse.ts`:
- Around line 102-114: The parseSseEvent generator should not let malformed JSON
abort the whole stream; wrap the JSON.parse call inside a try-catch within
parseSseEvent (the function currently yielding JSON.parse(data) as T) so that
parse errors are caught, logged (include the raw data and the error message),
and the function simply continues/returns without rethrowing, allowing
subsequent SSE events to be processed normally.
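The skip-and-continue behavior can be sketched with a small generator over already-split SSE `data:` payloads; `parseSseData` is an illustrative name, not the file's actual export:

```typescript
// Sketch of tolerant SSE data parsing: a malformed JSON payload is
// logged and skipped instead of aborting the whole stream.
function* parseSseData<T>(events: string[]): Generator<T> {
  for (const data of events) {
    if (data === '[DONE]') return
    try {
      yield JSON.parse(data) as T
    } catch (error) {
      // Keep the stream alive; only this event is dropped
      console.warn('Skipping malformed SSE event:', data, error)
    }
  }
}
```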

In `@styles.css`:
- Around line 1237-1241: The .smtcmp-plan-connect-fallback rule sets
flex-direction, align-items and gap but is missing display: flex, so those
properties have no effect; update the .smtcmp-plan-connect-fallback CSS rule to
include display: flex (or display: inline-flex if appropriate for layout) at the
top of the declaration so the flex container behavior is enabled.
🧹 Nitpick comments (20)
.gitignore (1)

23-26: LGTM! Clean gitignore updates.

The .DS_Store entry is correctly formatted to ignore macOS metadata files in all directories, and the new CLAUDE.md entry aligns with the Claude Code OAuth integration work in this PR.

💡 Optional: Consider whether CLAUDE.md should be root-specific

If CLAUDE.md is only expected at the repository root, you might want to use /CLAUDE.md instead to avoid accidentally ignoring Claude-related documentation in subdirectories. Alternatively, if multiple variations are expected, consider CLAUDE*.md.

The current pattern is fine if the intent is to ignore any file named exactly CLAUDE.md anywhere in the repository.

src/types/llm/request.ts (1)

30-32: Consider updating the comment to cover both reasoning fields.

The comment on line 30 states "Only available for OpenAI reasoning models" but only visually precedes reasoning_effort. Since reasoning_summary is also specific to OpenAI reasoning models, consider updating the comment to clearly apply to both fields.

📝 Suggested comment update
   // Only available for OpenAI reasoning models
   reasoning_effort?: ReasoningEffort
-  reasoning_summary?: Reasoning['summary']
+  reasoning_summary?: Reasoning['summary'] // Summary of model's reasoning process

Or update the comment so it explicitly covers both fields:

-  // Only available for OpenAI reasoning models
+  // Both fields below are only available for OpenAI reasoning models
   reasoning_effort?: ReasoningEffort
   reasoning_summary?: Reasoning['summary']
src/components/settings/modals/ConnectOpenAIPlanModal.tsx (1)

76-104: Preserve existing refresh tokens if the OAuth response omits them.

OpenAI's OAuth token endpoint treats refresh_token as optional per the OAuth 2.0 specification, even with the offline_access scope. The current code directly assigns tokens.refresh_token without validation, risking undefined overwrites that would break token refresh later. The check in openaiCodexProvider.ts (if (!this.provider.oauth?.refreshToken)) will fail silently if this happens.

♻️ Suggested defensive handling
-    if (
-      !plugin.settings.providers.find(
-        (p) => p.type === 'openai-plan' && p.id === OPENAI_PLAN_PROVIDER_ID,
-      )
-    ) {
-      throw new Error('OpenAI Plan provider not found.')
-    }
+    const existingProvider = plugin.settings.providers.find(
+      (p) => p.type === 'openai-plan' && p.id === OPENAI_PLAN_PROVIDER_ID,
+    )
+    if (!existingProvider) {
+      throw new Error('OpenAI Plan provider not found.')
+    }
+    const refreshToken =
+      tokens.refresh_token ?? existingProvider.oauth?.refreshToken
+    if (!refreshToken) {
+      throw new Error('OpenAI did not return a refresh token.')
+    }
@@
-              refreshToken: tokens.refresh_token,
+              refreshToken,
src/core/llm/NoStainlessOpenAI.ts (1)

3-27: Make header stripping case‑insensitive for plain objects.

HTTP headers are case‑insensitive, so normalizing keys avoids missing X‑Stainless-* variants in object maps.

♻️ Suggested tweak
-    headers.forEach((_value, key) => {
-      if (key.startsWith('x-stainless')) {
+    headers.forEach((_value, key) => {
+      if (key.toLowerCase().startsWith('x-stainless')) {
         keysToDelete.push(key)
       }
     })
@@
-  Object.keys(headerMap).forEach((key) => {
-    if (key.startsWith('x-stainless')) {
+  Object.keys(headerMap).forEach((key) => {
+    if (key.toLowerCase().startsWith('x-stainless')) {
      // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
       delete headerMap[key]
     }
   })
src/settings/schema/migrations/14_to_15.test.ts (1)

3-26: Test coverage could be expanded.

The tests verify the essential migration behavior (version bump and plan provider addition), but consider adding coverage for:

  1. Chat model migration (verifying plan chat models like claude-opus-4.5 (plan) are added)
  2. Preservation of custom providers/models during migration
  3. Provider ordering after migration
💡 Example additional test case
+  it('should add plan chat models', () => {
+    const oldSettings = {
+      version: 14,
+      providers: [
+        { type: 'anthropic', id: 'anthropic', apiKey: 'key' },
+      ],
+      chatModels: [
+        { providerType: 'anthropic', providerId: 'anthropic', id: 'claude-sonnet-4.5', model: 'claude-sonnet-4-5' },
+      ],
+    }
+
+    const result = migrateFrom14To15(oldSettings)
+    const chatModels = result.chatModels as { id: string }[]
+    expect(chatModels.find((m) => m.id === 'claude-opus-4.5 (plan)')).toBeDefined()
+    expect(chatModels.find((m) => m.id === 'gpt-5.2 (plan)')).toBeDefined()
+  })
src/components/settings/sections/PlanConnectionsSection.tsx (2)

17-20: Type assertion may mask missing defaultProviderId.

The as string assertion assumes defaultProviderId always exists in PROVIDER_TYPES_INFO for plan provider types. If this property is ever undefined, the assertion would silently pass type-checking while the value stays undefined at runtime.

💡 Safer approach with runtime check
-const CLAUDE_PLAN_PROVIDER_ID = PROVIDER_TYPES_INFO['anthropic-plan']
-  .defaultProviderId as string
-const OPENAI_PLAN_PROVIDER_ID = PROVIDER_TYPES_INFO['openai-plan']
-  .defaultProviderId as string
+const CLAUDE_PLAN_PROVIDER_ID =
+  PROVIDER_TYPES_INFO['anthropic-plan'].defaultProviderId
+const OPENAI_PLAN_PROVIDER_ID =
+  PROVIDER_TYPES_INFO['openai-plan'].defaultProviderId
+
+if (!CLAUDE_PLAN_PROVIDER_ID || !OPENAI_PLAN_PROVIDER_ID) {
+  throw new Error('Plan provider IDs not configured in PROVIDER_TYPES_INFO')
+}

151-174: CSS class naming reuses MCP server badge styles.

PlanConnectionStatusBadge uses smtcmp-mcp-server-status-badge classes which are semantically tied to MCP servers. This works but couples the component's styling to an unrelated feature.

Consider creating dedicated CSS classes for plan connection badges if the styles diverge in the future, or document this intentional reuse.

src/utils/llm/sse.ts (1)

8-53: Consider consolidating duplicate stream handling logic.

The WebReadableStream branch (lines 14-36) and Node.js stream branch (lines 38-52) have nearly identical buffer management and flush logic. This could be consolidated using an async iterator abstraction.

💡 Potential consolidation approach
async function* iterateStreamChunks(
  body: StreamSource,
  decoder: TextDecoder,
): AsyncIterable<string> {
  if (isWebReadableStream(body)) {
    const reader = body.getReader()
    while (true) {
      const { value, done } = await reader.read()
      if (value) yield decoder.decode(value, { stream: true })
      if (done) break
    }
  } else {
    for await (const chunk of body) {
      yield typeof chunk === 'string' ? chunk : decoder.decode(chunk as Uint8Array)
    }
  }
}

export async function* parseJsonSseStream<T>(body: StreamSource): AsyncIterable<T> {
  const decoder = new TextDecoder()
  let buffer = ''
  
  for await (const chunkText of iterateStreamChunks(body, decoder)) {
    buffer += chunkText
    yield* flushSseBuffer(buffer, (next) => { buffer = next })
  }
  yield* flushSseBuffer(buffer, (next) => { buffer = next }, true)
}
src/components/settings/sections/models/ChatModelSettings.tsx (1)

232-239: Type narrowing inconsistency when handling anthropic-plan.

The check on lines 232-234 accepts both anthropic and anthropic-plan provider types, but line 239 casts the model to ChatModel & { providerType: 'anthropic' }. If the model is anthropic-plan, accessing typedModel.thinking may work at runtime (since both schemas have the same thinking structure per the relevant snippet), but the type assertion is technically incorrect.

Consider widening the type to handle both variants:

Suggested type fix
-      const typedModel = model as ChatModel & { providerType: 'anthropic' }
+      const typedModel = model as ChatModel & { providerType: 'anthropic' | 'anthropic-plan' }
src/components/settings/modals/ConnectClaudePlanModal.tsx (2)

59-61: Empty catch block loses error context.

The catch block silently swallows the error. Consider logging it for debugging purposes while still showing the user notice.

Proposed improvement
-      } catch {
+      } catch (e) {
+        console.error('Failed to initialize Claude OAuth flow:', e)
         new Notice('Failed to initialize OAuth flow')
       }


112-113: Consider logging the error for debugging.

The catch block shows a generic message but doesn't log the actual error, making it harder to diagnose OAuth failures (e.g., network issues, invalid codes, CORS problems).

Proposed improvement
-    } catch {
+    } catch (error) {
+      console.error('Claude OAuth failed:', error)
       new Notice('OAuth failed. Double-check the code and try again.')
     } finally {
src/core/llm/claudeCodeAuth.ts (1)

81-93: Clarify the fragment parsing behavior.

The parseAuthorizationCode function handles codes containing a # fragment, which is unusual for standard OAuth authorization codes. Consider adding a comment explaining why this parsing is needed for Claude's specific OAuth implementation.

src/utils/llm/httpTransport.ts (2)

39-41: Inconsistent error messages between fetch and node paths.

The fetch path (line 40) only includes the status code in the error, while the node path (line 50) includes both status and response body. For consistency and better debugging, consider including the response body in fetch errors too.

Proposed improvement for postJson
     if (!response.ok) {
-      throw new Error(`Request failed: ${response.status}`)
+      const errorBody = await response.text().catch(() => '')
+      throw new Error(`Request failed: ${response.status} ${errorBody}`)
     }

75-77: Same inconsistency in postStream error handling.

Similar to postJson, the fetch path error message doesn't include the response body for debugging.

Proposed improvement for postStream
     if (!response.ok || !response.body) {
-      throw new Error(`Request failed: ${response.status}`)
+      const errorBody = await response.text().catch(() => '')
+      throw new Error(`Request failed: ${response.status} ${errorBody}`)
     }
src/core/llm/openaiCodexProvider.ts (1)

127-140: Preserve per-request reasoning overrides.
normalizeRequest always overwrites request.reasoning_* with model values (or undefined), which can drop caller-provided overrides. Consider preferring request values when they exist.

♻️ Proposed tweak
   return {
     ...request,
-    reasoning_effort: reasoningEffort
-      ? (reasoningEffort as ReasoningEffort)
-      : undefined,
-    reasoning_summary: reasoningSummary
-      ? (reasoningSummary as Reasoning['summary'])
-      : undefined,
+    reasoning_effort:
+      request.reasoning_effort ??
+      (reasoningEffort ? (reasoningEffort as ReasoningEffort) : undefined),
+    reasoning_summary:
+      request.reasoning_summary ??
+      (reasoningSummary ? (reasoningSummary as Reasoning['summary']) : undefined),
   }
src/types/chat-model.types.ts (1)

40-48: Use z.enum() to constrain OpenAI reasoning fields to allowed values.
Both reasoning_effort and reasoning_summary currently accept any string, allowing invalid values to fail only at request time. Constrain them with enums: reasoning_effort accepts "none", "minimal", "low", "medium", "high", "xhigh" (with version-specific availability); reasoning_summary accepts "auto", "concise", "detailed". This enables early validation during schema parsing.

Also applies to: 59-67 (only reasoning_effort is present there).

src/core/llm/codexAuth.ts (4)

42-43: Module-level mutable state may cause issues with concurrent OAuth flows.

If a user initiates a second OAuth flow before the first completes, the shared codexCallbackServer and isCodexCallbackStopping variables could lead to unexpected behavior. While stopCodexCallbackServer() is called before starting a new server (line 113), consider whether this is the desired UX or if concurrent flows should be explicitly blocked with a user-facing error.


79-99: Consider including response body in error messages for easier debugging.

When token exchange fails, the error only includes the HTTP status code. The response body often contains useful OAuth error details (error, error_description) that would help users and developers diagnose issues.

♻️ Suggested improvement
 export async function exchangeCodexCodeForTokens(params: {
   code: string
   redirectUri?: string
   pkceVerifier: string
 }): Promise<CodexTokenResponse> {
   const response = await fetch(`${CODEX_ISSUER}/oauth/token`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
     body: new URLSearchParams({
       grant_type: 'authorization_code',
       code: params.code,
       redirect_uri: params.redirectUri ?? CODEX_REDIRECT_URI,
       client_id: CODEX_CLIENT_ID,
       code_verifier: params.pkceVerifier,
     }).toString(),
   })
   if (!response.ok) {
-    throw new Error(`Codex token exchange failed: ${response.status}`)
+    const errorBody = await response.text().catch(() => '')
+    throw new Error(`Codex token exchange failed: ${response.status}${errorBody ? ` - ${errorBody}` : ''}`)
   }
   return (await response.json()) as CodexTokenResponse
 }

217-233: Same error handling improvement applies to token refresh.

The refresh token flow has the same limitation as token exchange - error details from the response body are discarded.

♻️ Suggested improvement
 export async function refreshCodexAccessToken(
   refreshToken: string,
 ): Promise<CodexTokenResponse> {
   const response = await fetch(`${CODEX_ISSUER}/oauth/token`, {
     method: 'POST',
     headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
     body: new URLSearchParams({
       grant_type: 'refresh_token',
       refresh_token: refreshToken,
       client_id: CODEX_CLIENT_ID,
     }).toString(),
   })
   if (!response.ok) {
-    throw new Error(`Codex token refresh failed: ${response.status}`)
+    const errorBody = await response.text().catch(() => '')
+    throw new Error(`Codex token refresh failed: ${response.status}${errorBody ? ` - ${errorBody}` : ''}`)
   }
   return (await response.json()) as CodexTokenResponse
 }

273-280: Minor modulo bias in random string generation.

With a 66-character alphabet and bytes from 0-255, characters at indices 0-57 have a ~1.56% chance while indices 58-65 have a ~1.17% chance (256 mod 66 = 58). For PKCE verifiers this is generally acceptable, but for higher-entropy requirements, rejection sampling would eliminate bias.

- Refactored URL parameter extraction to support dynamic keys for better reusability.
- Added state validation to ensure OAuth state consistency during the connection process.
- Updated error messages for clearer user guidance in case of missing or mismatched OAuth states.
- Introduced a new utility function for posting form data to streamline token requests.
- Enhanced HTTP transport methods to support form URL encoding and improved error handling.
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
README.md (2)

32-32: Fix typo: “Everytime” → “Every time.”
Minor documentation polish.

✅ Suggested fix
-Everytime we ask ChatGPT, we need to put so much context information for each query.
+Every time we ask ChatGPT, we need to put so much context information for each query.

58-58: Use correct brand capitalization: “YouTube.”
Minor documentation polish.

✅ Suggested fix
-- **Youtube link support**: YouTube transcripts are fetched and included as context
+- **YouTube link support**: YouTube transcripts are fetched and included as context
🧹 Nitpick comments (1)
README.md (1)

130-132: Soften “best performance” claim to avoid staleness.
This statement may age quickly and be disputed; consider wording like “often performs well” or “recommended based on current experience.”

💡 Possible wording
-Gemini API provides the best performance among free models for Smart Composer. Recommended for users looking for a free option.
+Gemini API often performs well among free options for Smart Composer. Recommended for users looking for a free option.

@kevin-on kevin-on merged commit 7862770 into main Jan 19, 2026
2 checks passed
@kevin-on kevin-on deleted the feat/connect-subscriptions branch January 19, 2026 06:48