
support openai compatible model reasoning content in streaming response#16

Merged
sjy3 merged 3 commits into volcengine:main from UnderTreeTech:main
Jan 30, 2026

Conversation


@UnderTreeTech (Contributor) commented Jan 5, 2026

Problems solved:

  • Missing Reasoning in Stream: The generateStream function currently ignores the reasoning_content field in the delta. As a result, reasoning content is neither yielded during the stream nor included in the final aggregated response.
  • Inconsistent Behavior: Non-streaming calls properly map reasoning content to genai.Part with Thought: true, whereas streaming calls drop this information entirely.
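The fix described above can be sketched as follows. The types here are illustrative stand-ins for the OpenAI-compatible stream delta and the `genai.Part` type mentioned in the PR, not the project's actual API: the point is that `reasoning_content` in a delta is mapped to a part flagged with `Thought: true` instead of being dropped.

```go
package main

import "fmt"

// delta is a hypothetical stand-in for one chunk of an OpenAI-compatible
// streaming response; field names mirror the JSON keys.
type delta struct {
	Content          string // "content"
	ReasoningContent string // "reasoning_content", previously ignored
}

// part is a hypothetical stand-in for genai.Part.
type part struct {
	Text    string
	Thought bool // true marks reasoning ("thought") content
}

// partsFromDelta yields both regular content and reasoning content,
// tagging the latter with Thought: true so streaming behavior matches
// the non-streaming path.
func partsFromDelta(d delta) []part {
	var parts []part
	if d.ReasoningContent != "" {
		parts = append(parts, part{Text: d.ReasoningContent, Thought: true})
	}
	if d.Content != "" {
		parts = append(parts, part{Text: d.Content})
	}
	return parts
}

func main() {
	ps := partsFromDelta(delta{ReasoningContent: "thinking...", Content: "answer"})
	for _, p := range ps {
		fmt.Printf("thought=%v text=%q\n", p.Thought, p.Text)
	}
}
```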

By default, thought content from OpenAI-compatible LLMs is filtered out.

If EnableThought is set to true, the agent will output thought content for the frontend to render.

Benefits

  • Flexibility: Can be enabled/disabled per agent by including/excluding the callback
  • Consistency: Provides a standard approach across different OpenAI-compatible providers
  • No Breaking Changes: Fully opt-in via the existing callback mechanism
  • Frontend/Backend Choice: Decision to filter can be made at either layer
  1. Backend filtering: add the callback to filter thought content before sending it to the frontend
  2. Frontend filtering: skip the callback and let the frontend decide whether to display thought content based on the part.Thought flag
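The backend-filtering option above could look roughly like this. The `part` type and `filterThoughts` function are hypothetical sketches of the opt-in callback the PR describes, not the repository's actual callback signature: registering it strips thought-flagged parts before they reach the frontend, while omitting it leaves the decision to the frontend.

```go
package main

import "fmt"

// part is a hypothetical stand-in mirroring genai.Part's Thought flag.
type part struct {
	Text    string
	Thought bool
}

// filterThoughts drops parts flagged as thought content. Used as an
// opt-in callback, it implements backend filtering; leaving it out
// keeps thoughts in the stream for the frontend to handle.
func filterThoughts(parts []part) []part {
	var out []part
	for _, p := range parts {
		if !p.Thought {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	parts := []part{
		{Text: "reasoning...", Thought: true},
		{Text: "final answer"},
	}
	for _, p := range filterThoughts(parts) {
		fmt.Println(p.Text)
	}
}
```

Because the filter only inspects the Thought flag, the same parts can be passed through unfiltered when EnableThought is true, so the change stays fully opt-in.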


@sjy3 (Collaborator) commented Jan 14, 2026

Adding a ThoughtFilterCallback doesn't seem like a good solution, as it appears to pertain to a front-end-related issue.

@sjy3 sjy3 merged commit 0c57259 into volcengine:main Jan 30, 2026
2 checks passed
