feat(copilot): Add detailed API error logging for debugging #11942
base: dev
Conversation
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough
Centralized API error handling was added to the chat service via new helpers `_extract_api_error_details` and `_log_api_error`.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Chat as ChatService
    participant Provider as Provider / OpenRouter
    participant Logger as Logger
    Client->>Chat: request chat / continuation
    Chat->>Provider: chat.completions.create(..., tools?, tool_choice?)
    alt Provider returns error
        Provider--x Chat: exception / error
        Chat->>Chat: _extract_api_error_details(error)
        Chat->>Logger: _log_api_error(error, session_id, message_count, model, retry_count)
        opt retryable
            Chat->>Provider: retry (exponential backoff)
            Provider-->>Chat: error or success
        end
        alt exhausted retries / final failure
            Chat->>Logger: _log_api_error(final_error, session_id, message_count, model, retry_count)
            Chat->>Client: return enriched error message / StreamError
        else success
            Chat->>Client: stream/return completion
        end
    else success
        Provider-->>Chat: streamed chunks / completion
        Chat->>Client: stream/return completion
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Force-pushed from 0171762 to 9688d66 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@autogpt_platform/backend/backend/api/features/chat/service.py`:
- Around line 834-846: The code currently copies error.body into
details["response_body"] and later logs details; instead sanitize and limit that
data by extracting only a whitelist of safe fields from error.body (message,
type, code, param) and/or serializing a reduced object, then truncate the
resulting string to 2000 characters before assigning to
details["response_body"]; keep other header extraction (openrouter_provider,
openrouter_model, retry_after, rate_limit_remaining) unchanged; also remove the
inline comments around these blocks so no unnecessary comments remain.
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/api/features/chat/service.py (1)
823-883: Remove redundant inline comments in the new helpers. The inline comments here restate obvious code paths and add noise. Consider removing them to align with the no-comments guideline.
♻️ Suggested cleanup
```diff
-    # Extract common attributes
     if hasattr(error, "code"):
         details["code"] = error.code
     if hasattr(error, "param"):
         details["param"] = error.param

-    # APIStatusError has rich response information
     if isinstance(error, APIStatusError):
         details["status_code"] = error.status_code
         details["request_id"] = getattr(error, "request_id", None)

-        # Extract response body (contains OpenRouter error details)
         if hasattr(error, "body") and error.body:
             details["response_body"] = error.body

-        # Extract useful headers
         if hasattr(error, "response") and error.response:
             headers = error.response.headers
-            # OpenRouter-specific headers
             details["openrouter_provider"] = headers.get("x-openrouter-provider")
             details["openrouter_model"] = headers.get("x-openrouter-model")
-            # Rate limit headers
             details["retry_after"] = headers.get("retry-after")
             details["rate_limit_remaining"] = headers.get("x-ratelimit-remaining")

-    # Add context
     details["session_id"] = session_id
     details["message_count"] = message_count
     details["model"] = model
     details["retry_count"] = retry_count

-    # Log at appropriate level
     if isinstance(error, RateLimitError):
         logger.warning(f"Rate limit error: {details}")
```

As per coding guidelines, "Avoid comments at all times unless the code is very complex".
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
autogpt_platform/backend/backend/api/features/chat/service.py
🧰 Additional context used
📓 Path-based instructions (2)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Always run backend setup commands in order: poetry install, poetry run prisma migrate dev, poetry run prisma generate before backend development
Always run poetry run format (Black + isort) before poetry run lint (ruff) for backend code
Use Python 3.10-3.13 with Python 3.11 required for development (managed by Poetry via pyproject.toml)
autogpt_platform/backend/**/*.py: Use FastAPI with async support for API endpoints in the backend
Use Prisma ORM for database operations with PostgreSQL
Use RabbitMQ for async task processing in the backend
Use JWT-based authentication with Supabase integration
Use `poetry run format` (Black + isort) to format code and `poetry run lint` (ruff) for linting in the backend
Use ClamAV integration for file upload security
autogpt_platform/backend/**/*.py: Format Python code with `poetry run format`
Run `poetry run test` (runs pytest with a docker based postgres + prisma) before committing backend changes
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/**/*.{ts,tsx,js,py}
📄 CodeRabbit inference engine (AGENTS.md)
Avoid comments at all times unless the code is very complex
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
🧬 Code graph analysis (1)
autogpt_platform/backend/backend/api/features/chat/service.py (2)
autogpt_platform/backend/backend/util/logging.py (2)
- error (49-51), warning (45-47)

autogpt_platform/frontend/src/components/contextual/Chat/stream-utils.ts (1)
- MAX_RETRIES (78-78)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Seer Code Review
- GitHub Check: types
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: Check PR Status
🔇 Additional comments (2)
autogpt_platform/backend/backend/api/features/chat/service.py (2)
1502-1566: LGTM — richer error context in streaming path. Good addition of `_log_api_error` and provider error extraction; this will materially improve diagnostics during retries and final failures.
1989-2027: LGTM — continuation path now logs richer API context. The new structured logging on retryable and terminal failures should make continuation issues much easier to triage.
Force-pushed from 0367919 to 50ec204 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@autogpt_platform/backend/backend/api/features/chat/service.py`:
- Around line 1448-1457: Remove the inline explanatory comments in the error
extraction block inside service.py (the section using _extract_api_error_details
and the subsequent response_body check) so only the code remains; specifically
delete the comments like "Extract more specific error info for user message" and
"Use the provider's error message if available" while keeping the logic that
assigns error_details = _extract_api_error_details(e) and the conditional that
inspects error_details["response_body"] and extracts body["error"]["message"]
intact.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
autogpt_platform/backend/backend/api/features/chat/service.py
🧰 Additional context used
📓 Path-based instructions (2)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Always run backend setup commands in order: poetry install, poetry run prisma migrate dev, poetry run prisma generate before backend development
Always run poetry run format (Black + isort) before poetry run lint (ruff) for backend code
Use Python 3.10-3.13 with Python 3.11 required for development (managed by Poetry via pyproject.toml)
autogpt_platform/backend/**/*.py: Use FastAPI with async support for API endpoints in the backend
Use Prisma ORM for database operations with PostgreSQL
Use RabbitMQ for async task processing in the backend
Use JWT-based authentication with Supabase integration
Use `poetry run format` (Black + isort) to format code and `poetry run lint` (ruff) for linting in the backend
Use ClamAV integration for file upload security
autogpt_platform/backend/**/*.py: Format Python code with `poetry run format`
Run `poetry run test` (runs pytest with a docker based postgres + prisma) before committing backend changes
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/**/*.{ts,tsx,js,py}
📄 CodeRabbit inference engine (AGENTS.md)
Avoid comments at all times unless the code is very complex
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
🧬 Code graph analysis (1)
autogpt_platform/backend/backend/api/features/chat/service.py (1)
autogpt_platform/backend/backend/util/logging.py (2)
- error (49-51), warning (45-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: types
- GitHub Check: Seer Code Review
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
- GitHub Check: Check PR Status
- GitHub Check: conflicts
🔇 Additional comments (2)
autogpt_platform/backend/backend/api/features/chat/service.py (2)
1474-1480: LGTM on logging after retry exhaustion. Clear and consistent final failure logging with context.
1914-1931: LGTM on centralized logging in LLM continuation failures. The retry and terminal failure paths now consistently emit enriched diagnostics.
Force-pushed from 50ec204 to ccc671c (Compare)
Adds comprehensive error logging for OpenRouter/OpenAI API errors to help diagnose issues like provider routing failures, context length exceeded, rate limits, etc.

Changes:
- Add _extract_api_error_details() to extract rich info from API errors including status code, response body, OpenRouter headers, etc.
- Add _log_api_error() helper to log errors with context (session ID, message count, model, retry count)
- Update error handling in _stream_chat_chunks() to use new logging
- Update error handling in _generate_llm_continuation() to use new logging
- Extract provider's error message from response body for better user feedback

This helps debug issues like SECRT-1859 where OpenRouter returns 'provider returned error' with provider_name='unknown' without capturing the actual error details.

Refs: SECRT-1859
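A hedged sketch of what such an extraction helper might look like (attribute and header names are taken from this PR's diffs; this approximates, rather than reproduces, the service implementation):

```python
from typing import Any


def extract_api_error_details(error: Exception) -> dict[str, Any]:
    # Collect whatever diagnostic attributes the SDK error happens to carry.
    details: dict[str, Any] = {
        "error_type": type(error).__name__,
        "message": str(error),
    }
    status = getattr(error, "status_code", None)
    if status is not None:
        details["status_code"] = status
    details["request_id"] = getattr(error, "request_id", None)
    body = getattr(error, "body", None)
    if body:
        details["response_body"] = body
    response = getattr(error, "response", None)
    if response is not None:
        headers = response.headers
        details["openrouter_provider"] = headers.get("x-openrouter-provider")
        details["retry_after"] = headers.get("retry-after")
    return details
```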
Force-pushed from ccc671c to efd1e96 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@autogpt_platform/backend/backend/api/features/chat/service.py`:
- Around line 1448-1454: The code in the chat service that extracts API error
text uses body.get("error", {}).get("message") without ensuring body["error"] is
a dict; update the logic in the block after calling
_extract_api_error_details(e) so it first checks that body is a dict and that
isinstance(body.get("error"), dict) before accessing .get("message"), and fall
back to safe alternatives (e.g., treat non-dict error as a string or use
str(body.get("error")) or a generic message) when setting error_text in the
method where error_details and body are handled.
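The defensive access pattern this prompt calls for can be sketched as follows (the function name and `error_details` shape are assumed from the surrounding review, not copied from the service code):

```python
from typing import Any


def extract_error_text(error_details: dict[str, Any], fallback: str) -> str:
    # Pull a provider error message out of a response body without assuming its shape.
    body = error_details.get("response_body")
    if isinstance(body, dict):
        err = body.get("error")
        if isinstance(err, dict) and err.get("message"):
            return str(err["message"])
        if err is not None:
            return str(err)  # non-dict error: fall back to its string form
        if body.get("message"):
            return str(body["message"])
    return fallback
```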
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/api/features/chat/service.py (1)
1912-1912: Remove inline comment per coding guidelines. The comment "Non-retryable error - log details and exit gracefully" describes obvious control flow and should be removed. As per coding guidelines, "Avoid comments at all times unless the code is very complex."
♻️ Proposed fix
```diff
         else:
-            # Non-retryable error - log details and exit gracefully
             _log_api_error(
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
autogpt_platform/backend/backend/api/features/chat/service.py
🧰 Additional context used
📓 Path-based instructions (2)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Always run backend setup commands in order: poetry install, poetry run prisma migrate dev, poetry run prisma generate before backend development
Always run poetry run format (Black + isort) before poetry run lint (ruff) for backend code
Use Python 3.10-3.13 with Python 3.11 required for development (managed by Poetry via pyproject.toml)
autogpt_platform/backend/**/*.py: Use FastAPI with async support for API endpoints in the backend
Use Prisma ORM for database operations with PostgreSQL
Use RabbitMQ for async task processing in the backend
Use JWT-based authentication with Supabase integration
Use `poetry run format` (Black + isort) to format code and `poetry run lint` (ruff) for linting in the backend
Use ClamAV integration for file upload security
autogpt_platform/backend/**/*.py: Format Python code with `poetry run format`
Run `poetry run test` (runs pytest with a docker based postgres + prisma) before committing backend changes
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/**/*.{ts,tsx,js,py}
📄 CodeRabbit inference engine (AGENTS.md)
Avoid comments at all times unless the code is very complex
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Seer Code Review
- GitHub Check: types
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: Check PR Status
🔇 Additional comments (2)
autogpt_platform/backend/backend/api/features/chat/service.py (2)
1968-1990: LGTM! The `_log_api_error()` helper correctly categorizes errors by severity and logs at appropriate levels. The structured logging with context metadata (session_id, message_count, model, retry_count) will significantly improve debugging capabilities.
2022-2040: LGTM! The `_sanitize_error_body()` function properly addresses the security concern by whitelisting only safe fields (message, type, code, param, error), recursively handling nested error structures, and truncating long values. This prevents accidental leakage of sensitive metadata in logs.
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
@Otto-AGPT CI fails: `poetry run format` fails, and there are lots of PR comments you need to fix
majdyz
left a comment
CI fails, linting fails, lots of PR comments unaddressed. @Otto-AGPT pls fix
Resolved conflict in service.py by keeping both:
- Error logging functions (_log_api_error, _extract_api_error_details, _sanitize_error_body)
- Streaming continuation function (_generate_llm_continuation_with_streaming)
Addresses review feedback: body.get('error', {}).get('message') is unsafe
when body['error'] is None or not a dict. Now properly checks isinstance()
before accessing nested fields, and falls back to top-level message field.
✅ Addressed review feedback:
@majdyz ready for re-review 🙏
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@autogpt_platform/backend/backend/api/features/chat/service.py`:
- Around line 1846-1849: The _sanitize_error_body function currently drops
non-dict bodies; change it to preserve diagnostic info by returning a dict for
strings/other types: if body is not a dict, convert it to a string, truncate to
max_length, and return something like {"body": "<truncated string>"} (or
{"type": "<type-name>", "body": "<truncated string>"} for clarity); keep
existing behavior for dicts but ensure any string values are truncated to
max_length and sensitive keys are filtered as before. Ensure the function
signature and return type remain dict[str, Any] | None and that the truncation
uses the provided max_length.
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/api/features/chat/service.py (1)
1109-1127: `_extract_api_error_details` is called twice for the same error. `_log_api_error` (line 1109) internally calls `_extract_api_error_details`, then line 1119 calls it again. Consider having `_log_api_error` return the extracted details so callers can reuse them, avoiding the redundant extraction.

♻️ Suggested refactor

```diff
 def _log_api_error(
     error: Exception,
     session_id: str | None = None,
     message_count: int | None = None,
     model: str | None = None,
     retry_count: int = 0,
-) -> None:
-    """Log detailed API error information for debugging."""
+) -> dict[str, Any]:
+    """Log detailed API error information for debugging. Returns extracted details."""
     details = _extract_api_error_details(error)
     details["session_id"] = session_id
     details["message_count"] = message_count
     details["model"] = model
     details["retry_count"] = retry_count
     if isinstance(error, RateLimitError):
         logger.warning(f"Rate limit error: {details}")
     elif isinstance(error, APIConnectionError):
         logger.warning(f"API connection error: {details}")
     elif isinstance(error, APIStatusError) and error.status_code >= 500:
         logger.error(f"API server error (5xx): {details}")
     else:
         logger.error(f"API error: {details}")
+
+    return details
```

Then at the call site:

```diff
-    _log_api_error(
+    error_details = _log_api_error(
         error=e,
         session_id=session.session_id if session else None,
         message_count=len(messages) if messages else None,
         model=model,
         retry_count=retry_count,
     )
     error_code = None
     error_text = str(e)
-    error_details = _extract_api_error_details(e)
     if error_details.get("response_body"):
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/backend/backend/data/execution_queue_test.py
💤 Files with no reviewable changes (1)
- autogpt_platform/backend/backend/data/execution_queue_test.py
🧰 Additional context used
📓 Path-based instructions (6)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/backend/backend/api/features/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
When modifying API routes, update corresponding Pydantic models in the same directory and write tests alongside the route file
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/backend/**/*.{py,txt}
📄 CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use `poetry run` prefix for all Python commands, including testing, linting, formatting, and migrations
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/backend/backend/api/**/*.py
📄 CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
autogpt_platform/backend/backend/api/**/*.py: Use FastAPI for building REST and WebSocket endpoints
Use JWT-based authentication with Supabase integration
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/backend/backend/**/*.py
📄 CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use Prisma ORM for database operations in PostgreSQL with pgvector for embeddings
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
autogpt_platform/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Format Python code with `poetry run format`
Files:
autogpt_platform/backend/backend/api/features/chat/service.py
🧠 Learnings (1)
📚 Learning: 2026-01-19T07:20:23.494Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11795
File: autogpt_platform/backend/backend/api/features/chat/tools/utils.py:92-111
Timestamp: 2026-01-19T07:20:23.494Z
Learning: In autogpt_platform/backend/backend/api/features/chat/tools/utils.py, the _serialize_missing_credential function uses next(iter(field_info.provider)) for provider selection. The PR author confirmed this non-deterministic provider selection is acceptable because the function returns both "type" (single, for backward compatibility) and "types" (full array), which achieves the primary goal of deterministic credential type presentation.
Applied to files:
autogpt_platform/backend/backend/api/features/chat/service.py
🧬 Code graph analysis (1)
autogpt_platform/backend/backend/api/features/chat/service.py (2)
autogpt_platform/backend/backend/util/logging.py (2)
- error (49-51), warning (45-47)

autogpt_platform/frontend/src/components/contextual/Chat/stream-utils.ts (1)
- MAX_RETRIES (83-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Seer Code Review
- GitHub Check: types
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: Check PR Status
🔇 Additional comments (4)
autogpt_platform/backend/backend/api/features/chat/service.py (4)
1145-1154: LGTM! Exhausted-retries path correctly logs with `_log_api_error` and yields appropriate error/finish events.
1736-1753: LGTM! Both the non-retryable and exhausted-retries paths in `_generate_llm_continuation` correctly use `_log_api_error` with appropriate context. Since this is a background task with no user-facing error stream, there's no need for the additional error-text extraction.
1792-1813: LGTM! Good log-level differentiation: rate-limit and connection errors logged at `warning` (transient), server errors at `error`. The function signature is clear and well-typed.
1829-1841: The review comment is incorrect. According to the official OpenAI Python SDK documentation, `APIStatusError` explicitly exposes `request_id` as a direct attribute. The SDK's README shows the example `exc.request_id` for accessing request IDs from exceptions. The current implementation using `getattr(error, "request_id", None)` at line 1831 is appropriate and safe; it correctly retrieves the request ID from the error object. No changes are needed. Likely an incorrect or invalid review comment.
```python
def _sanitize_error_body(body: Any, max_length: int = 2000) -> dict[str, Any] | None:
    """Extract only safe fields from error response body to avoid logging sensitive data."""
    if not isinstance(body, dict):
        return None
```
Non-dict error bodies are silently discarded, losing diagnostic info.
When error.body is a string (e.g., an HTML error page from a proxy or CDN), the function returns None and the body is lost from logs. Consider returning a truncated string representation for non-dict bodies.
🔧 Proposed fix
```diff
 def _sanitize_error_body(body: Any, max_length: int = 2000) -> dict[str, Any] | None:
     """Extract only safe fields from error response body to avoid logging sensitive data."""
     if not isinstance(body, dict):
-        return None
+        if body is not None:
+            text = str(body)
+            return {"raw": text[:max_length] + "...[truncated]" if len(text) > max_length else text}
+        return None
```

🤖 Prompt for AI Agents
+ return None🤖 Prompt for AI Agents
In `@autogpt_platform/backend/backend/api/features/chat/service.py` around lines
1846 - 1849, The _sanitize_error_body function currently drops non-dict bodies;
change it to preserve diagnostic info by returning a dict for strings/other
types: if body is not a dict, convert it to a string, truncate to max_length,
and return something like {"body": "<truncated string>"} (or {"type":
"<type-name>", "body": "<truncated string>"} for clarity); keep existing
behavior for dicts but ensure any string values are truncated to max_length and
sensitive keys are filtered as before. Ensure the function signature and return
type remain dict[str, Any] | None and that the truncation uses the provided
max_length.
Fixes pyright errors:
- Cannot access attribute 'code' for class 'Exception'
- Cannot access attribute 'param' for class 'Exception'

Using getattr() instead of direct attribute access satisfies pyright's type checker while maintaining the same runtime behavior.
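The `getattr()` pattern from that commit, sketched (a hypothetical helper illustrating the idea; pyright accepts `getattr` on a base `Exception` where direct attribute access would be flagged):

```python
def collect_optional_attrs(error: Exception) -> dict[str, object]:
    # Read attributes that only exist on some SDK error subclasses
    # without tripping the type checker or raising AttributeError.
    details: dict[str, object] = {}
    code = getattr(error, "code", None)
    if code is not None:
        details["code"] = code
    param = getattr(error, "param", None)
    if param is not None:
        details["param"] = param
    return details
```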
Addresses CodeRabbit feedback: non-dict error bodies (e.g., HTML error pages from proxies) were silently discarded, losing diagnostic info. Now returns truncated string representation instead of None.
Additional fixes pushed:
All lint + type checks should pass now. 🤞
Swiftyos
left a comment
Thanks, this will be useful
```python
import queue
import threading

import pytest
```
we shouldn't be modifying files not related to the PR's objective
Summary
Adds comprehensive error logging for OpenRouter/OpenAI API errors to help diagnose issues like provider routing failures, context length exceeded, rate limits, etc.
Background
While investigating SECRT-1859, we found that when OpenRouter returns errors, the actual error details weren't being captured or logged. Langfuse traces showed `provider_name: 'unknown'` and `completion: null` without any insight into WHY all providers rejected the request.
- `_extract_api_error_details()` to extract rich information from API errors
- `_log_api_error()` helper that logs errors with context
- Update `_stream_chat_chunks()` and `_generate_llm_continuation()` to use new logging

Example log output
Testing
Refs