
fix(openai): capture token usage for Responses API streaming #1680

Draft
wingding12 wants to merge 2 commits into pydantic:main from wingding12:fix-openai-responses-stream-token-usage-1651

Conversation

@wingding12

Closes #1651

Summary

When using the OpenAI Responses API with streaming (client.responses.stream()), logfire did not capture token usage attributes (gen_ai.usage.input_tokens, gen_ai.usage.output_tokens) on the span. Non-streaming calls worked correctly.

Change

OpenaiResponsesStreamState.get_attributes() now extracts usage from the response and sets INPUT_TOKENS and OUTPUT_TOKENS on the span, matching the non-streaming path and the pattern used in on_response() (lines 298–304).
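
The extraction described above can be sketched roughly as follows. This is a simplified illustration, not logfire's actual implementation: the `Usage` and `Response` dataclasses are stand-ins for the OpenAI SDK's final streamed response, and `get_attributes` mirrors only the usage-handling step the PR adds.

```python
# Hedged sketch of the usage extraction added for the streaming path.
# Usage/Response are hypothetical stand-ins for the OpenAI Responses API
# objects; the attribute keys match the gen_ai.usage.* names in the PR.
from __future__ import annotations

from dataclasses import dataclass
from typing import Any

# Semantic-convention attribute keys mentioned in the PR description.
INPUT_TOKENS = 'gen_ai.usage.input_tokens'
OUTPUT_TOKENS = 'gen_ai.usage.output_tokens'


@dataclass
class Usage:
    """Stand-in for the usage object on the final streamed response."""
    input_tokens: int
    output_tokens: int


@dataclass
class Response:
    """Stand-in for the final response produced by the stream."""
    usage: Usage | None = None


def get_attributes(response: Response) -> dict[str, Any]:
    """Extract token usage into span attributes, as the fix does when the
    stream completes. If the response carries no usage, set nothing."""
    attributes: dict[str, Any] = {}
    usage = getattr(response, 'usage', None)
    if usage is not None:
        attributes[INPUT_TOKENS] = usage.input_tokens
        attributes[OUTPUT_TOKENS] = usage.output_tokens
    return attributes
```

Guarding on `usage is not None` matters because intermediate stream events carry no usage; only the final response does, which is why the streaming path previously ended the span without these attributes.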

…c#1651)

OpenaiResponsesStreamState.get_attributes() did not extract usage
(input_tokens, output_tokens) from the response, so gen_ai.usage.*
span attributes were missing for streaming Responses API calls.
Add the same usage extraction used in non-streaming paths.
@codecov

codecov bot commented Jan 31, 2026

Codecov Report

❌ Patch coverage is 71.42857% with 2 lines in your changes missing coverage. Please review.

Files with missing lines                                  | Patch %  | Lines
...ire/_internal/integrations/llm_providers/openai.py     | 71.42%   | 0 Missing and 2 partials ⚠️



Development

Successfully merging this pull request may close these issues.

Streaming token usage not captured for OpenAI Responses API
