[TT-6075] Create option to populate the X-ratelimit headers from rate limits rather than quotas #7730

fix faling tests
75918ab
probelabs / Visor: performance succeeded Feb 19, 2026 in 46s

✅ Check Passed (Warnings Found)

performance check passed. Found 1 warning, but the fail_if condition was not met.

Details

📊 Summary

  • Total Issues: 1
  • Warning Issues: 1

🔍 Failure Condition Results

Passed Conditions

  • global_fail_if: Condition passed

Issues by Category

Performance (1)

  • ⚠️ gateway/session_manager.go:188 - The limitSentinel function spawns a new goroutine for every request to update the rate limit counter in Redis. While this makes the initial check non-blocking, creating a goroutine per request can be inefficient under high load, potentially stressing the Go scheduler and increasing memory consumption. This pattern can lead to unbounded resource usage if the rate of incoming requests is higher than the rate at which Redis can process the updates.
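
For illustration, the goroutine-per-request pattern described above might look roughly like the following. This is a hypothetical sketch, not the actual `limitSentinel` code in gateway/session_manager.go; the `counterStore` interface and its method names are invented for the example.

```go
package ratelimit

import "context"

// counterStore abstracts the Redis-backed rate-limit counter. The method
// names are placeholders for this illustration, not Tyk's storage API.
type counterStore interface {
	CheckLimit(key string) bool
	IncrementCounter(ctx context.Context, key string) error
}

// limitCheckSketch illustrates the flagged pattern: the limit check itself is
// non-blocking, but every call spawns a fresh goroutine to push the counter
// update to Redis, so the number of in-flight goroutines is unbounded.
func limitCheckSketch(store counterStore, key string) bool {
	allowed := store.CheckLimit(key)

	// Fire-and-forget write; nothing bounds how many of these can be
	// outstanding at once under sustained load.
	go func() {
		_ = store.IncrementCounter(context.Background(), key)
	}()

	return allowed
}
```

Under a burst of traffic, each call adds another goroutine waiting on Redis, which is the unbounded growth this check is flagging.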

Powered by Visor from Probelabs

💡 TIP: You can chat with Visor using /visor ask <your question>

Annotations

Check warning on line 190 in gateway/session_manager.go


probelabs / Visor: performance

performance Issue

The `limitSentinel` function spawns a new goroutine for every request to update the rate limit counter in Redis. While this makes the initial check non-blocking, creating a goroutine per request can be inefficient under high load, potentially stressing the Go scheduler and increasing memory consumption. This pattern can lead to unbounded resource usage if the rate of incoming requests is higher than the rate at which Redis can process the updates.
Raw output
To improve resource management and ensure system stability under high-concurrency scenarios, consider using a worker pool pattern. A fixed number of worker goroutines could process rate limit updates from a buffered channel. This would cap the number of concurrent goroutines and Redis writes, preventing resource exhaustion and leading to more predictable performance.
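
A minimal sketch of that worker-pool suggestion in Go, assuming a fixed worker count and a buffered queue. The `updatePool` type, `rateLimitUpdate` struct, and the `apply` callback are made up for this illustration and would need to be adapted to Tyk's actual storage and session APIs.

```go
package ratelimit

import "context"

// rateLimitUpdate carries one pending counter increment; the fields are
// placeholders for whatever the real update needs.
type rateLimitUpdate struct {
	key string
}

// updatePool sketches the worker-pool approach: a fixed set of workers drains
// a buffered channel of updates, capping concurrent goroutines and Redis writes.
type updatePool struct {
	updates chan rateLimitUpdate
}

// newUpdatePool starts `workers` goroutines that apply queued updates via the
// supplied apply function (e.g. a wrapper around a Redis increment).
func newUpdatePool(ctx context.Context, workers, queueSize int,
	apply func(context.Context, rateLimitUpdate) error) *updatePool {
	p := &updatePool{updates: make(chan rateLimitUpdate, queueSize)}
	for i := 0; i < workers; i++ {
		go func() {
			for u := range p.updates {
				// Each worker processes updates sequentially, so total
				// concurrency is bounded by the worker count.
				_ = apply(ctx, u)
			}
		}()
	}
	return p
}

// enqueue hands an update to the pool without blocking the request path.
// When the buffer is full it reports false; the caller can drop, block, or
// count the overflow depending on the desired back-pressure policy.
func (p *updatePool) enqueue(u rateLimitUpdate) bool {
	select {
	case p.updates <- u:
		return true
	default:
		return false
	}
}

// stop closes the queue; workers exit once the remaining updates are drained.
func (p *updatePool) stop() { close(p.updates) }
```

The non-blocking enqueue keeps the request path fast, while the worker count and buffer size give explicit knobs for Redis write concurrency and back-pressure instead of one goroutine per request.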