
feat: add retry logic to MCP client connection establishment and tool retrieval #1568

Merged
Pratham-Mishra04 merged 1 commit into main from 02-06-feat_added_retries_for_mcp_connection on Feb 10, 2026
Conversation

@Pratham-Mishra04 (Collaborator)

Add retry logic to MCP client connections and tool retrieval

Adds robust exponential backoff retry logic to MCP client connections and tool retrieval operations. This improves resilience against transient network failures and temporary service unavailability.

Changes

  • Implemented exponential backoff retry logic (5 retries, 1-30 seconds) for MCP client connections
  • Added retry support for connection establishment, transport start, client initialization, and tool retrieval
  • Created intelligent error classification to distinguish between transient errors (network issues, timeouts) and permanent errors (auth failures, config errors)
  • Added a NoOpLogger implementation for cases where logging is not needed
  • Updated documentation with detailed explanation of retry behavior and connection resilience
  • Extended connection timeout from 30 to 60 seconds to accommodate retry attempts

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test

Test MCP client connections with intermittent network failures:

```shell
# Start Bifrost
go run cmd/bifrost/main.go

# Connect to an MCP client that may have network issues
curl -X POST http://localhost:8080/api/mcp/client -d '{
  "name": "test-client",
  "connectionType": "http",
  "endpoint": "http://flaky-service:8080"
}'

# Observe retry logs in console output
# Verify client eventually connects or fails after max retries
```

Breaking changes

  • No

Security considerations

No additional security implications. The retry logic only applies to already authenticated connections and doesn't bypass any security checks.

@github-actions (Contributor)

github-actions bot commented Feb 6, 2026

🧪 Test Suite Available

This PR can be tested by a repository admin.

Run tests for PR #1568

Collaborator Author

Pratham-Mishra04 commented Feb 6, 2026

This stack of pull requests is managed by Graphite. Learn more about stacking.

@coderabbitai (Contributor)

coderabbitai bot commented Feb 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds exponential backoff retry logic for MCP client connection, transport start, initialization, and tool retrieval; increases connection establishment timeout. Removes explicit logger parameters from provider ListModels flows in favor of internal logger access. Adds NoOpLogger, updates model-catalog pricing reload/sync, and related docs and changelogs.

Changes

  • MCP connection & retry utilities — core/mcp/clientmanager.go, core/mcp/utils.go, core/mcp/mcp.go: implements an exponential backoff retry wrapper (RetryConfig, ExecuteWithRetry, isTransientError), per-attempt transport/external-client creation and cleanup, per-attempt Start/Initialize timeouts, and raises MCPClientConnectionEstablishTimeout from 30s to 60s.
  • MCP health & executor tweaks — core/mcp/healthmonitor.go, core/mcp/codemode/starlark/executecode.go: the health monitor now stops when the client entry is missing and documents reconnect intent; removes an unexported local toolBinding type in the starlark executor.
  • Provider logging refactor — core/providers/openai/openai.go, core/providers/anthropic/anthropic.go, core/providers/azure/azure.go, core/providers/bedrock/bedrock.go, core/providers/cerebras/cerebras.go, core/providers/cohere/cohere.go, core/providers/elevenlabs/elevenlabs.go, core/providers/gemini/gemini.go, core/providers/groq/groq.go, core/providers/huggingface/huggingface.go, core/providers/mistral/mistral.go, core/providers/nebius/nebius.go, core/providers/ollama/ollama.go, core/providers/openrouter/openrouter.go, core/providers/parasail/parasail.go, core/providers/sgl/sgl.go, core/providers/vertex/vertex.go, core/providers/xai/xai.go, core/providers/utils/utils.go: removes logger parameters from multi-key/list-models helper calls and downstream handlers; updates signatures (e.g., HandleOpenAIListModelsRequest, HandleMultipleListModelsRequests, extractSuccessfulListModelsResponses) to obtain loggers internally.
  • Logging utilities — core/logger.go: adds a NoOpLogger type with no-op implementations and a NewNoOpLogger() constructor.
  • Model catalog & pricing sync — framework/modelcatalog/main.go, transports/bifrost-http/server/server.go, transports/bifrost-http/lib/config.go: adds ResetModelPool() and repopulation of the in-memory model pool after ForceReloadPricing; the server's ForceReloadPricing now initializes a missing catalog, synchronizes models, and logs detailed errors; config init assigns ModelCatalog only on successful init.
  • Docs & changelogs — docs/mcp/connecting-to-servers.mdx, docs/mcp/overview.mdx, core/changelog.md, transports/changelog.md: adds "Connection Resilience and Retry Logic" documentation and updates the overview and changelogs to mention retry logic and related fixes.
  • Minor formatting — core/bifrost.go, core/changelog.md: import formatting/newline in core/bifrost.go; appended changelog entry in core/changelog.md.

Sequence Diagram

```mermaid
sequenceDiagram
    participant CM as ClientManager
    participant RW as RetryWrapper (ExecuteWithRetry)
    participant EC as ExternalClient
    participant TR as Transport
    participant INIT as Initialize()

    CM->>RW: connectToMCPClient(ctx)
    RW->>EC: create fresh ExternalClient (per attempt)
    RW->>TR: create Transport(using EC)
    RW->>TR: Start(ctx with per-attempt timeout)
    alt Start succeeds
        TR-->>RW: started
        RW->>INIT: Initialize(ctx with per-attempt timeout)
        alt Initialize succeeds
            INIT-->>RW: initialized
            RW-->>CM: return connected EC
        else Initialize transient error
            INIT-->>RW: transient error
            RW->>EC: Close() & cleanup
            RW->>RW: wait exponential backoff then retry
        else Initialize permanent error
            INIT-->>RW: permanent error
            RW->>EC: Close() & cleanup
            RW-->>CM: return error
        end
    else Start transient error
        TR-->>RW: transient error
        RW->>EC: Close() & cleanup
        RW->>RW: wait exponential backoff then retry
    else Start permanent error
        TR-->>RW: permanent error
        RW->>EC: Close() & cleanup
        RW-->>CM: return error
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I nibble at retries, hop by hop,

Fresh clients spring up, old ones stop,
Timeouts lengthen, backoffs unfold,
Connections rekindled, brave and bold,
No-op logs hum softly while systems hop.

🚥 Pre-merge checks | ✅ 3 passed

  • Title check — ✅ Passed: the PR title accurately summarizes the main feature (adding retry logic to MCP client connections and tool retrieval), which is the primary focus across the modified files.
  • Description check — ✅ Passed: the PR description covers most required template sections (summary, changes, type of change, affected areas, testing instructions, breaking changes, security considerations); the related-issues section is missing.
  • Docstring coverage — ✅ Passed: docstring coverage is 100.00%, above the required 80.00% threshold.


@Pratham-Mishra04 Pratham-Mishra04 mentioned this pull request Feb 6, 2026
18 tasks

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@core/logger.go`:
- Around line 124-138: NoOpLogger.Fatal currently is a no-op which violates the
schemas.Logger contract that Fatal must terminate the program; update
NoOpLogger.Fatal to honor that contract (e.g., log the message then terminate
with os.Exit(1) or panic) so callers relying on termination behave correctly;
modify the implementation in the NoOpLogger type (the Fatal method) and apply
the same change consistently to other no-op logger implementations referenced
(e.g., those in core/mcp/init.go and core/providers/utils/utils.go) so all
implementations of schemas.Logger.Fatal terminate the process as documented.
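One way to satisfy that contract, sketched here with a stand-in Logger interface (the real schemas.Logger may have a different shape; this is illustrative only):

```go
package main

import (
	"fmt"
	"os"
)

// Logger is a stand-in for schemas.Logger; the real interface may differ.
type Logger interface {
	Debug(format string, args ...any)
	Info(format string, args ...any)
	Warn(format string, args ...any)
	Error(format string, args ...any)
	Fatal(format string, args ...any)
}

// NoOpLogger discards all log output, but Fatal still terminates the
// process, as the Logger contract requires.
type NoOpLogger struct{}

func NewNoOpLogger() *NoOpLogger { return &NoOpLogger{} }

func (l *NoOpLogger) Debug(string, ...any) {}
func (l *NoOpLogger) Info(string, ...any)  {}
func (l *NoOpLogger) Warn(string, ...any)  {}
func (l *NoOpLogger) Error(string, ...any) {}

// Fatal honors the termination contract even though normal output is
// discarded: it writes the message to stderr and exits non-zero.
func (l *NoOpLogger) Fatal(format string, args ...any) {
	fmt.Fprintf(os.Stderr, format+"\n", args...)
	os.Exit(1)
}
```

A no-op Fatal is the dangerous case the comment is flagging: callers that expect Fatal to halt would silently continue with invalid state.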

In `@core/mcp/clientmanager.go`:
- Around line 552-568: When Start of the created externalClient (call to
externalClient.Start inside ExecuteWithRetry) fails or later initialization
fails, the code currently only cancels the context and returns, leaking
resources; update both error paths to close the created client: call
externalClient.Close(ctx) (or externalClient.Close() if that signature is used)
and log any Close error before returning, mirroring the cleanup pattern used
later (see the cleanup at the end of the function). Ensure this is applied for
the transport-start failure block (after ExecuteWithRetry around
externalClient.Start) and for the initialization failure path (around the
initialization step referenced near line 609), and keep the conditional cancel()
for STDIO/SSE connection types.

In `@core/mcp/utils.go`:
- Around line 237-283: The retry loop doubles the backoff before sleeping so
InitialBackoff is never used; change the logic in the loop that uses
backoff/nextBackoff (variables in this block) to sleep on the current backoff,
then after the sleep update backoff = min(backoff*2, config.MaxBackoff).
Specifically, remove computing nextBackoff := backoff * 2 before the time.After,
use time.After(backoff) for the wait (and log backoff), then set backoff =
nextBackoff (capped by config.MaxBackoff) for the next iteration; keep checks
for ctx.Done(), isTransientError, and attempt == config.MaxRetries as-is.

In `@docs/mcp/connecting-to-servers.mdx`:
- Around line 643-650: Docs list HTTP 504 as a transient error but the code's
transient list in isTransientError (in utils.go) only contains
"503","502","429","500"; update the code to match docs by adding "504" to the
explicit transient HTTP status set used by isTransientError, or alternatively
remove "504" from the docs—preferably add "504" to the
transientStatuses/explicit transient list inside isTransientError so 504 Gateway
Timeout is treated as transient alongside 500/502/503/429.
- Around line 630-637: The backoff table is inconsistent with the
implementation: remove "Attempt 7" because DefaultRetryConfig.MaxRetries = 5
yields only 6 attempts total (1 initial + 5 retries), and update the first wait
entry to match the actual behavior in ExecuteWithRetry (if the backoff bug is
unfixed set Attempt 2 wait to 2s; if you've patched ExecuteWithRetry to use
exponential base-2 backoff set Attempt 2 wait to 1s). Ensure the doc references
DefaultRetryConfig.MaxRetries and ExecuteWithRetry when you adjust the table so
future changes remain consistent.
🧹 Nitpick comments (2)
core/mcp/utils.go (1)

211-213: Defaulting to transient (retryable) for unrecognized errors is aggressive.

Any error not explicitly classified — including configuration / programming bugs (e.g. "unknown connection type" from the transport switch's default branch) — will be retried 5 times with long backoffs. A safer default is return false, relying on the explicit transient list to opt errors in to retries.

Proposed change
-	// Default: treat as transient to be safe (connection-related errors)
-	// This ensures we retry unknown errors that are likely transient
-	return true
+	// Default: treat as permanent to avoid retrying non-transient errors
+	// (e.g., programming bugs, unknown config errors)
+	return false
core/mcp/clientmanager.go (1)

554-562: Shared timeout context limits effective retry count.

For HTTP connections, ctx at line 558 has a single 60s timeout created at line 548. Each retry calls externalClient.Start(ctx) with this same context, so the total time across all retry attempts is capped at 60s — not per attempt. With backoff waits summing to 60s alone, only ~2 retries are realistically possible before context.DeadlineExceeded fires (which isTransientError classifies as permanent, stopping retries).

Similarly, initCtx at line 603 for SSE/STDIO has a single 60s timeout.

If the intent is to allow each attempt its own time budget, create a per-attempt timeout context inside the retry closure.

Also applies to: 598-608


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@core/mcp/clientmanager.go`:
- Around line 604-626: The initCtx deadline is created once and reused across
ExecuteWithRetry attempts so slow first attempts can exhaust the deadline and
prevent further retries; move creation of initCtx (and its cancel) inside the
retry closure passed to ExecuteWithRetry so each attempt gets a fresh
per-attempt timeout (e.g., recreate initCtx with the 60s deadline inside the
anonymous func used by ExecuteWithRetry), ensure you call cancel() after each
attempt to avoid leaks, and keep existing cleanup logic (externalClient.Close()
and logging) unchanged; this preserves expected retry behavior and works with
isTransientError/DefaultRetryConfig semantics.
- Around line 59-72: ReconnectClient currently wraps connectToMCPClient in an
outer ExecuteWithRetry using DefaultRetryConfig, causing nested/compounding
retries since connectToMCPClient already performs its own ExecuteWithRetry
calls; remove the outer ExecuteWithRetry and call m.connectToMCPClient(config)
directly from ReconnectClient (mirroring AddClient), then handle and return any
error (e.g., fmt.Errorf with id and err) so retries are only the internal
per-step retries managed by connectToMCPClient.
- Around line 552-574: The current retry wraps externalClient.Start() (and
similarly externalClient.Initialize()) with ExecuteWithRetry which repeatedly
calls the one-time lifecycle methods on the same *client instance, causing
resource leaks; change the logic so each retry attempt uses a fresh client: on
Start/Initialize error close the failing externalClient (if non-nil), recreate a
new client instance (call the existing NewClient / client construction code),
and then call Start()/Initialize() once on that new instance inside the retry
loop (or alternatively move the retry loop above NewClient so callers create a
new client per attempt); ensure you still cancel the long-lived context
(cancel()) for SSE/STDIO failures and preserve the existing cleanup logging
(m.logger.Warn) when closing clients.
🧹 Nitpick comments (1)
core/mcp/utils.go (1)

141-214: Substring-based error classification is fragile — bare HTTP status codes can false-match.

The permanent/transient lists use short substrings like "500", "400", "503", "429", "not found". These can inadvertently match port numbers, request IDs, or other numeric fragments in error messages (e.g., "connection to port 5003 failed" matches "500", and "max 4004 connections" matches "400").

Additionally, the default: return true on line 213 means every error not matched by either list is treated as transient. This causes clearly permanent errors (e.g., "unknown connection type" returned from connectToMCPClient) to be retried needlessly.

Consider using word-boundary-aware matching or checking for structured error types/codes before falling back to string matching. If string matching is kept, padding status codes (e.g., "HTTP 500", "status 400") would reduce false positives.
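A word-boundary check along these lines would avoid the false matches described above (a sketch only, not the actual shape of isTransientError in utils.go):

```go
package main

import "regexp"

// transientStatusRe matches bare HTTP status codes only at word boundaries,
// so "connection to port 5003 failed" no longer false-matches "500".
var transientStatusRe = regexp.MustCompile(`\b(429|500|502|503|504)\b`)

// mentionsTransientStatus reports whether an error message contains a
// transient HTTP status code as a standalone token.
func mentionsTransientStatus(msg string) bool {
	return transientStatusRe.MatchString(msg)
}
```

As the comment notes, checking structured error types (e.g., errors.As against typed errors, where the upstream library exposes them) is more robust than any string matching; the regex is a fallback when only message text is available.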

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch from e8f4299 to 0b694e7 Compare February 6, 2026 13:53

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@core/logger.go`:
- Around line 124-138: There are three duplicated private noop logger
implementations that should be removed and replaced with the public NoOpLogger;
update the other modules to use NewNoOpLogger() instead of their private
types/constructors, delete the duplicate noopLogger type definitions and methods
(the private noop implementations), and ensure any places constructing or
referencing those private noop loggers now call NewNoOpLogger() and use the
schemas.Logger interface; keep the NoOpLogger methods/signature as-is so callers
don’t need changes beyond replacing constructor calls.

In `@core/mcp/clientmanager.go`:
- Around line 512-547: The retry loop currently calls externalClient.Start(ctx)
using the longLivedCtx (longLivedCtx) with no per-attempt deadline; update the
ExecuteWithRetry closure so each attempt derives a per-attempt timeout context
from longLivedCtx (e.g., via context.WithTimeout), use that perAttemptCtx when
calling externalClient.Start, ensure you call the cancel() after Start returns
and still close the previous externalClient on retry failures; keep the rest of
the retry logic (transportRetryConfig,
m.createHTTPConnection/m.createSTDIOConnection/m.createSSEConnection/m.createInProcessConnection
and externalClient.Close) the same so each attempt uses a fresh client and times
out reliably.

In `@docs/mcp/connecting-to-servers.mdx`:
- Around line 629-635: The documentation's "Backoff Progression" no longer
matches the corrected behavior in ExecuteWithRetry with DefaultRetryConfig;
update the list/table to show that ExecuteWithRetry sleeps the current backoff
starting from InitialBackoff = 1s before doubling, producing waits: Attempt 1
(no wait), Attempt 2: 1s, Attempt 3: 2s, Attempt 4: 4s, Attempt 5: 8s, Attempt
6: 16s. Replace the old 2s→4s→8s→16s→30s sequence with this corrected
progression and mention InitialBackoff and DefaultRetryConfig as the source of
the values.

In `@plugins/maxim/go.mod`:
- Around line 11-14: Remove the unused direct dependency
github.com/bytedance/sonic v1.14.2 from the require block in
plugins/maxim/go.mod (leave github.com/google/uuid v1.6.0 intact), verify there
are no imports of "github.com/bytedance/sonic" in the code (e.g., search the
plugins/maxim package and files like main.go), then run go mod tidy to let Go
resolve transitive deps and update go.sum accordingly.
🧹 Nitpick comments (2)
core/mcp/utils.go (2)

141-214: Default-to-transient (line 213) is a bold choice — verify it's intentional.

isTransientError returns true for any error that doesn't match the explicit permanent or transient lists. This means any unknown/unexpected error will be retried up to 5 times before failing. While the comment says this is intentional for "connection-related errors", it also means that, e.g., a server returning an unexpected error string will be retried.

If the intent is safety-biased (retry when unsure), this is fine but worth calling out: a miscategorized permanent error will cause unnecessary retries and delay the failure response by up to ~31 seconds (1+2+4+8+16s).

Also, consider whether the string-matching approach is fragile against errors from different locales, wrapped errors, or upstream library changes. Using typed errors (e.g., errors.As for specific HTTP status codes) where possible would be more robust — though this may depend on what mcp-go exposes.


228-280: Backoff is correct after the fix — minor nit on duration multiplication.

The backoff logic is now correct: sleep on current backoff, then double.

Line 273 uses time.Duration(float64(backoff) * 2) — this works but backoff * 2 is idiomatic Go since time.Duration supports integer multiplication and avoids the float64 round-trip.

Suggested simplification
-		backoff = time.Duration(float64(backoff) * 2)
+		backoff *= 2

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch 2 times, most recently from c09e4e1 to a1b748d Compare February 6, 2026 14:25
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch 2 times, most recently from b287a86 to ce2d462 Compare February 9, 2026 09:53

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@transports/bifrost-http/server/server.go`:
- Around line 671-685: The code causes duplicate model entries on repeated
ForceReloadPricing because ModelCatalog.AddModelDataToPool appends without
deduping and the "models added to catalog" log fires even when listing fails;
modify the reload path (after s.Config.ModelCatalog.ForceReloadPricing and
s.Client.ListAllModels) to only call AddModelDataToPool when ListAllModels
succeeds and either clear or replace the existing pool before adding (e.g., call
a clear/ReplaceModelPool method on ModelCatalog or implement dedupe in
AddModelDataToPool), and move the logger.Info("models added to catalog") inside
the successful branch so it only logs when models were actually added; reference
symbols: ForceReloadPricing, ListAllModels, AddModelDataToPool,
populateModelPoolFromPricingData.
🧹 Nitpick comments (1)
core/mcp/clientmanager.go (1)

587-624: Consider per-attempt timeout for Initialize retry.

Unlike the transport Start() retry (which correctly creates perAttemptCtx inside the closure at lines 544-549), the Initialize() retry shares a single initCtx across all attempts (created at line 594). If an early attempt consumes most of the 60s timeout, subsequent attempts will have insufficient time and fail with context.DeadlineExceeded, which isTransientError treats as permanent (non-retryable).

This is a minor concern since initialization is typically fast, but for consistency with the transport retry pattern, consider moving initCtx creation inside the retry closure.

♻️ Suggested fix for consistency
-	// For STDIO/SSE: Use a timeout context for initialization to prevent indefinite hangs
-	// The subprocess will continue running with the long-lived context
-	var initCtx context.Context
-	var initCancel context.CancelFunc
-
-	if config.ConnectionType == schemas.MCPConnectionTypeSSE || config.ConnectionType == schemas.MCPConnectionTypeSTDIO {
-		// Create timeout context for initialization phase only
-		initCtx, initCancel = context.WithTimeout(longLivedCtx, MCPClientConnectionEstablishTimeout)
-		defer initCancel()
-		m.logger.Debug("%s [%s] Initializing client with %v timeout...", MCPLogPrefix, config.Name, MCPClientConnectionEstablishTimeout)
-	} else {
-		// HTTP already has timeout
-		initCtx = ctx
-	}
-
 	// Initialize client with retry logic
 	initRetryConfig := DefaultRetryConfig
 	err = ExecuteWithRetry(
 		m.ctx,
 		func() error {
+			// Create per-attempt timeout context for initialization
+			var initCtx context.Context
+			var initCancel context.CancelFunc
+			if config.ConnectionType == schemas.MCPConnectionTypeSSE || config.ConnectionType == schemas.MCPConnectionTypeSTDIO {
+				initCtx, initCancel = context.WithTimeout(longLivedCtx, MCPClientConnectionEstablishTimeout)
+			} else {
+				initCtx, initCancel = context.WithTimeout(ctx, MCPClientConnectionEstablishTimeout)
+			}
+			defer initCancel()
 			_, initErr := externalClient.Initialize(initCtx, extInitRequest)
 			return initErr
 		},
 		initRetryConfig,
 		m.logger,
 	)

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch from ce2d462 to 6d75121 Compare February 9, 2026 12:33

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/mcp/utils.go (1)

284-343: ⚠️ Potential issue | 🟠 Major

Avoid reverse-mapping tool names to original MCP names.

This function still builds a sanitized→original mapping and retains original names for execution; the MCP tool system expects sanitized names end-to-end, so reverse mapping risks inconsistencies with CallToolParams.Name and ExtraFields.ToolName. Please drop the mapping and use sanitized names throughout. Based on learnings: “In MCP tool system under core/mcp/, tool names are sanitized by replacing '-' with '_' during discovery and this sanitized form is used throughout the system, including CallToolParams.Name and ExtraFields.ToolName. Do not reverse-sanitize or maintain mappings back to original names; rely on sanitized names in all MCP server calls and UI representations.”
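The sanitization rule referenced here is simple enough to state in code (a sketch; sanitizeToolName is an illustrative name, not necessarily the helper used in core/mcp/):

```go
package main

import "strings"

// sanitizeToolName mirrors the discovery-time sanitization described above:
// '-' is replaced with '_', and this sanitized form is used end-to-end
// (CallToolParams.Name, ExtraFields.ToolName), with no reverse mapping.
func sanitizeToolName(name string) string {
	return strings.ReplaceAll(name, "-", "_")
}
```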

🧹 Nitpick comments (1)
core/mcp/clientmanager.go (1)

589-624: Minor: initCtx timeout is shared across all retry attempts.

The initCtx created at line 594 has a single 60s deadline shared across all ExecuteWithRetry attempts. If early attempts consume significant time, later attempts will have reduced timeout windows.

This is likely acceptable since Initialize() is typically fast (protocol negotiation), but it differs from the per-attempt timeout pattern used for Start(). Consider creating initCtx inside the retry closure if consistent per-attempt behavior is desired.

Collaborator Author

Pratham-Mishra04 commented Feb 9, 2026

Merge activity

  • Feb 9, 4:40 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Feb 9, 5:11 PM UTC: Graphite couldn't merge this PR because it failed for an unknown reason (GitHub threw an unexpected error that did not resolve after multiple retries. Please try again later or contact Graphite support if this continues.).
  • Feb 9, 5:12 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Feb 9, 5:44 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Feb 9, 5:45 PM UTC: Graphite couldn't merge this PR because it had merge conflicts.
  • Feb 10, 9:14 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Feb 10, 9:14 AM UTC: Graphite couldn't merge this PR because it had merge conflicts.
  • Feb 10, 9:22 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Feb 10, 9:23 AM UTC: @Pratham-Mishra04 merged this pull request with Graphite.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch from d21585b to 871d512 Compare February 9, 2026 19:31
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 02-06-feat_added_retries_for_mcp_connection branch from 871d512 to fda220f Compare February 10, 2026 09:22
@Pratham-Mishra04 Pratham-Mishra04 merged commit 21e1a1e into main Feb 10, 2026
9 checks passed
@Pratham-Mishra04 Pratham-Mishra04 deleted the 02-06-feat_added_retries_for_mcp_connection branch February 10, 2026 09:23