
feat: update LLM error handling with retry logic and enhance model validation messages #4

Open
movclantian wants to merge 1 commit into nomie7:main from movclantian:main

Conversation

@movclantian (Contributor) commented on Jan 11, 2026

This pull request introduces robust error handling and retry logic for LLM API calls, enhances validation and error messaging for model and API settings, and improves UI consistency for session editing. The main changes are grouped as follows:

Backend: Error Handling and Retry Logic

  • Added custom LLMError class and comprehensive error mapping for LLM API errors, including rate limit, authentication, connection, and server errors. This enables precise error reporting to the frontend.
  • Implemented a withRetry function that automatically retries API calls on retryable errors (e.g., rate limits, server errors) with exponential backoff, improving the reliability of LLM requests (a sketch of this pattern follows the list).
  • Updated generateWithLLM and generateWithLLMStream to use the new retry and error handling mechanisms, ensuring consistent error propagation and resilience.
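For orientation, here is a minimal TypeScript sketch of the pattern described above. The LLMError name matches the PR description, but the error codes, retry count, and backoff delays below are illustrative assumptions, not the actual code in server/src/services/llm.ts.

// Sketch only: codes, delays, and retry counts here are assumptions.
type LLMErrorCode =
  | "RATE_LIMIT"
  | "AUTH"
  | "CONNECTION"
  | "SERVER"
  | "BAD_REQUEST"
  | "UNKNOWN";

class LLMError extends Error {
  constructor(
    message: string,
    public code: LLMErrorCode,
    public retryable: boolean,
  ) {
    super(message);
    this.name = "LLMError";
  }
}

async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Only retry errors flagged as retryable (rate limits, server errors),
      // and give up after maxRetries additional attempts.
      const retryable = error instanceof LLMError && error.retryable;
      if (!retryable || attempt >= maxRetries) throw error;
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}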

Frontend: Validation and Error Messaging

  • Expanded model identifier validation in SettingsDialog to support Chinese characters, brackets, and spaces, and updated the corresponding validation regex (an illustrative pattern follows the list).
  • Enhanced error messages in both English and Chinese locales to provide user-friendly descriptions for various LLM API error scenarios (rate limit, authentication, connection, server, bad request, unknown errors).
  • Improved the "invalid model" error message to clarify supported characters and added a security note about API key storage.
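A pattern along these lines would accept the characters named above; the exact regex in SettingsDialog.tsx may differ, so treat this as an assumption:

// Illustrative only: allows word characters, dots, slashes, colons,
// hyphens, CJK characters (U+4E00-U+9FFF), brackets, and spaces.
const MODEL_ID_PATTERN = /^[\w./:\-\u4e00-\u9fff()\[\] ]+$/;

const isValidModelId = (id: string) =>
  id.trim().length > 0 && MODEL_ID_PATTERN.test(id.trim());

isValidModelId("gpt-4o");          // true
isValidModelId("通义千问 (Qwen)"); // true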

UI/UX Improvements

  • Refactored the session editing UI in SessionSidebar to move action buttons (save/cancel) outside the input field, increase their size, and ensure better alignment and accessibility.
  • Minor UI tweaks in Index.tsx for button click handling, sticky sidebar offset, and footer spacing for improved layout consistency.

Copilot AI review requested due to automatic review settings on January 11, 2026 at 07:12

Copilot AI left a comment


Pull request overview

This PR enhances LLM error handling by implementing retry logic with exponential backoff for transient errors, updating model validation to support additional characters (Chinese, brackets, spaces), and improving UI/UX with better button positioning and edit mode layout in the session sidebar.

Changes:

  • Added comprehensive retry logic with exponential backoff for API errors (rate limits, connection errors, server errors)
  • Enhanced model identifier validation regex to support Chinese characters, brackets, and spaces, with updated validation messages
  • Improved UI positioning and layout for session editing buttons and sticky elements

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/pages/Index.tsx: Fixed onClick handler to prevent event propagation to the cancel function; adjusted sticky positioning and footer margin
  • src/i18n/locales/zh.json: Added detailed model validation message, security note, and error message translations
  • src/i18n/locales/en.json: Added detailed model validation message, security note, and error message translations
  • src/components/SettingsDialog.tsx: Updated model validation regex to support Chinese characters, brackets, and spaces
  • src/components/SessionSidebar.tsx: Restructured edit mode UI with absolute positioning for check/cancel buttons
  • server/src/services/llm.ts: Implemented LLMError class, retry logic with exponential backoff, and comprehensive error handling
  • package-lock.json: Version bump to 1.0.6 and peer dependency additions


Comment on lines +229 to +230 (src/i18n/locales/zh.json)
"badRequestError": "请求无效。请检查输入参数。",
"unknownError": "发生未知错误。请重试。"

Copilot AI Jan 11, 2026


The new error translation keys (rateLimitError, authError, connectionError, serverError, badRequestError, unknownError) have been added to both en.json and zh.json but are not used anywhere in the codebase. The frontend simply displays the error.message from the server directly without mapping error codes to i18n keys. Consider either implementing error code to i18n key mapping in the frontend, or removing these unused translation entries.

Suggested change
- "badRequestError": "请求无效。请检查输入参数。",
- "unknownError": "发生未知错误。请重试。"
+ "badRequestError": "请求无效。请检查输入参数。"

Comment on lines +219 to +225 (src/i18n/locales/en.json)
"loadSessionsDescription": "An error occurred while loading your sessions. Please try again.",
"rateLimitError": "API rate limit exceeded. Please wait a moment and try again.",
"authError": "Invalid API key. Please check your API key configuration.",
"connectionError": "Failed to connect to API server. Please check your network or API base URL.",
"serverError": "API server error. Please try again later.",
"badRequestError": "Invalid request. Please check your input parameters.",
"unknownError": "An unexpected error occurred. Please try again."

Copilot AI Jan 11, 2026


The new error translation keys (rateLimitError, authError, connectionError, serverError, badRequestError, unknownError) have been added but are not used anywhere in the codebase. The frontend simply displays the error.message from the server directly without mapping error codes to i18n keys. Consider either implementing error code to i18n key mapping in the frontend, or removing these unused translation entries.

Suggested change
- "loadSessionsDescription": "An error occurred while loading your sessions. Please try again.",
- "rateLimitError": "API rate limit exceeded. Please wait a moment and try again.",
- "authError": "Invalid API key. Please check your API key configuration.",
- "connectionError": "Failed to connect to API server. Please check your network or API base URL.",
- "serverError": "API server error. Please try again later.",
- "badRequestError": "Invalid request. Please check your input parameters.",
- "unknownError": "An unexpected error occurred. Please try again."
+ "loadSessionsDescription": "An error occurred while loading your sessions. Please try again."

Comment on lines +187 to +194 (server/src/services/llm.ts)
try {
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) yield content;
  }
} catch (error) {
  handleAPIError(error);
}

Copilot AI Jan 11, 2026


In the retry logic, when an error occurs during the stream iteration (lines 187-194), the error is handled by handleAPIError which always throws. However, at this point, the stream has already been successfully created by withRetry. If the error occurs while consuming the stream chunks, it won't be retried. Consider whether stream consumption errors should also be retryable, or if the current behavior (only retry stream creation) is intentional.
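One way to make consumption retryable, sketched here under the assumption that regenerating from scratch is acceptable: move the whole iteration inside withRetry (as sketched earlier) and buffer the chunks. This trades streaming latency for retryability, since nothing is yielded until the stream completes.

// Sketch only: buffers the full stream so a mid-stream failure can be
// retried without duplicating chunks already shown to the caller.
async function* streamWithRetry(
  createStream: () => Promise<AsyncIterable<{ choices: { delta?: { content?: string } }[] }>>,
): AsyncGenerator<string> {
  const chunks = await withRetry(async () => {
    const collected: string[] = [];
    for await (const chunk of await createStream()) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) collected.push(content);
    }
    return collected;
  });
  yield* chunks;
}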
