
Commit 93bea2d

opentelemetry-util-genai: add support for emitting inference events and enrich message types (#3994)

Cirilla-zmhaabmass and Aaron Abbott authored
Squashed commit history (each commit carries a Gerrit `Change-Id` and the trailer `Co-developed-by: Cursor <noreply@cursor.com>`):

* Add support for emitting inference events and enrich message types
* Add change log
* Fix unit tests
* Fix linting failure
* Fix readme
* Format codes
* Fix missing trace context in events
* feedback
* fix type check
* Fix the span name of LLM invocations
* Fix span name
* Fix operation name of llm span
* Adjust event emission switches to optimize the integration experience
* Resolve conflicts
* Fix lint error
* Refactor span_utils.py to streamline attribute collection
* Fix the lint error

Co-authored-by: Aaron Abbott <aaronabbott@google.com>
1 parent b8a8020 commit 93bea2d

File tree

10 files changed (+969 −118 lines)


util/opentelemetry-util-genai/CHANGELOG.md

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ## Unreleased

+- Add support for emitting inference events and enrich message types. ([#3994](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3994))
 - Add support for `server.address`, `server.port` on all signals and additional metric-only attributes
   ([#4069](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4069))
 - Log error when `fsspec` fails to be imported instead of silently failing ([#4037](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4037)).

util/opentelemetry-util-genai/README.rst

Lines changed: 19 additions & 1 deletion

@@ -9,7 +9,18 @@ while providing standardization for generating both types of otel, "spans and me
 This package relies on environment variables to configure capturing of message content.
 By default, message content will not be captured.
 Set the environment variable `OTEL_SEMCONV_STABILITY_OPT_IN` to `gen_ai_latest_experimental` to enable experimental features.
-And set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `SPAN_ONLY` or `SPAN_AND_EVENT` to capture message content in spans.
+Set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to one of:
+- `NO_CONTENT`: Do not capture message content (default).
+- `SPAN_ONLY`: Capture message content in spans only.
+- `EVENT_ONLY`: Capture message content in events only.
+- `SPAN_AND_EVENT`: Capture message content in both spans and events.
+
+To control event emission, you can optionally set `OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT` to `true` or `false` (case-insensitive).
+This variable controls whether to emit `gen_ai.client.inference.operation.details` events.
+If not explicitly set, the default value is automatically determined by `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT`:
+- When `NO_CONTENT` or `SPAN_ONLY` is set: defaults to `false`
+- When `EVENT_ONLY` or `SPAN_AND_EVENT` is set: defaults to `true`
+If explicitly set, the user's value takes precedence over the default.

 This package provides these span attributes:

@@ -23,6 +34,13 @@ This package provides these span attributes:
 - `gen_ai.usage.output_tokens`: Int(7)
 - `gen_ai.input.messages`: Str('[{"role": "Human", "parts": [{"content": "hello world", "type": "text"}]}]')
 - `gen_ai.output.messages`: Str('[{"role": "AI", "parts": [{"content": "hello back", "type": "text"}], "finish_reason": "stop"}]')
+- `gen_ai.system_instructions`: Str('[{"content": "You are a helpful assistant.", "type": "text"}]') (when system instruction is provided)
+
+When `EVENT_ONLY` or `SPAN_AND_EVENT` mode is enabled and a LoggerProvider is configured,
+the package also emits `gen_ai.client.inference.operation.details` events with structured
+message content (as dictionaries instead of JSON strings). Note that when using `EVENT_ONLY`
+or `SPAN_AND_EVENT`, the `OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT` environment variable defaults
+to `true`, so events will be emitted automatically unless explicitly set to `false`.


 Installation
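The defaulting rule described in the README changes above can be sketched in isolation. This is a hypothetical standalone helper, not code from the package; the function name `should_emit_event` is an assumption for illustration.

```python
import os

# Hypothetical sketch of the README's defaulting rule for
# OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT; the package's real helper may differ.
def should_emit_event() -> bool:
    capture = os.environ.get(
        "OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "NO_CONTENT"
    ).upper()
    # The derived default follows the capture mode: event-capturing
    # modes emit events by default, span-only modes do not.
    default = capture in ("EVENT_ONLY", "SPAN_AND_EVENT")

    explicit = os.environ.get("OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT")
    if explicit is not None:
        # An explicit user setting always wins over the derived default.
        return explicit.strip().lower() == "true"
    return default
```

The point of the two-step lookup is that the emit switch stays ergonomic: users who opt into event content capture get events without setting a second variable, but can still force events off.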

util/opentelemetry-util-genai/src/opentelemetry/util/genai/_upload/completion_hook.py

Lines changed: 1 addition & 1 deletion

@@ -174,7 +174,7 @@ def _calculate_ref_path(
     if is_system_instructions_hashable(system_instruction):
         # Get a hash of the text.
         system_instruction_hash = hashlib.sha256(
-            "\n".join(x.content for x in system_instruction).encode(  # pyright: ignore[reportUnknownMemberType, reportAttributeAccessIssue, reportUnknownArgumentType]
+            "\n".join(x.content for x in system_instruction).encode(  # pyright: ignore[reportUnknownMemberType, reportAttributeAccessIssue, reportUnknownArgumentType, reportCallIssue, reportArgumentType]
                 "utf-8"
             ),
             usedforsecurity=False,
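The hashing shown in this hunk can be exercised on its own. Below is a simplified sketch with a stand-in `Text` dataclass in place of the package's message-part type; the `is_system_instructions_hashable` guard and the pyright suppressions are omitted.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Text:
    """Stand-in for the package's text-part type (assumed shape)."""

    content: str


def hash_system_instruction(system_instruction: list[Text]) -> str:
    # Join the text parts with newlines and hash the UTF-8 bytes.
    # usedforsecurity=False flags this as a non-cryptographic use,
    # which matters on FIPS-restricted builds of OpenSSL.
    return hashlib.sha256(
        "\n".join(x.content for x in system_instruction).encode("utf-8"),
        usedforsecurity=False,
    ).hexdigest()
```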

util/opentelemetry-util-genai/src/opentelemetry/util/genai/environment_variables.py

Lines changed: 9 additions & 0 deletions

@@ -16,6 +16,15 @@
     "OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"
 )

+OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT = "OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT"
+"""
+.. envvar:: OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT
+
+Controls whether to emit gen_ai.client.inference.operation.details events.
+Must be one of ``true`` or ``false`` (case-insensitive).
+Defaults to ``false``.
+"""
+
 OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK = (
     "OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK"
 )

util/opentelemetry-util-genai/src/opentelemetry/util/genai/handler.py

Lines changed: 21 additions & 8 deletions

@@ -65,10 +65,11 @@
 from typing import Iterator

 from opentelemetry import context as otel_context
-from opentelemetry.metrics import MeterProvider, get_meter
-from opentelemetry.semconv._incubating.attributes import (
-    gen_ai_attributes as GenAI,
+from opentelemetry._logs import (
+    LoggerProvider,
+    get_logger,
 )
+from opentelemetry.metrics import MeterProvider, get_meter
 from opentelemetry.semconv.schemas import Schemas
 from opentelemetry.trace import (
     Span,

@@ -80,7 +81,8 @@
 from opentelemetry.util.genai.metrics import InvocationMetricsRecorder
 from opentelemetry.util.genai.span_utils import (
     _apply_error_attributes,
-    _apply_finish_attributes,
+    _apply_llm_finish_attributes,
+    _maybe_emit_llm_event,
 )
 from opentelemetry.util.genai.types import Error, LLMInvocation
 from opentelemetry.util.genai.version import __version__

@@ -96,6 +98,7 @@ def __init__(
         self,
         tracer_provider: TracerProvider | None = None,
         meter_provider: MeterProvider | None = None,
+        logger_provider: LoggerProvider | None = None,
     ):
         self._tracer = get_tracer(
             __name__,

@@ -106,6 +109,12 @@
         self._metrics_recorder: InvocationMetricsRecorder | None = None
         meter = get_meter(__name__, meter_provider=meter_provider)
         self._metrics_recorder = InvocationMetricsRecorder(meter)
+        self._logger = get_logger(
+            __name__,
+            __version__,
+            logger_provider,
+            schema_url=Schemas.V1_37_0.value,
+        )

     def _record_llm_metrics(
         self,

@@ -129,7 +138,7 @@ def start_llm(
         """Start an LLM invocation and create a pending span entry."""
         # Create a span and attach it as current; keep the token to detach later
         span = self._tracer.start_span(
-            name=f"{GenAI.GenAiOperationNameValues.CHAT.value} {invocation.request_model}",
+            name=f"{invocation.operation_name} {invocation.request_model}",
             kind=SpanKind.CLIENT,
         )
         # Record a monotonic start timestamp (seconds) for duration

@@ -148,8 +157,9 @@ def stop_llm(self, invocation: LLMInvocation) -> LLMInvocation:  # pylint: disab
             return invocation

         span = invocation.span
-        _apply_finish_attributes(span, invocation)
+        _apply_llm_finish_attributes(span, invocation)
         self._record_llm_metrics(invocation, span)
+        _maybe_emit_llm_event(self._logger, span, invocation)
         # Detach context and end span
         otel_context.detach(invocation.context_token)
         span.end()

@@ -164,10 +174,11 @@ def fail_llm(  # pylint: disable=no-self-use
             return invocation

         span = invocation.span
-        _apply_finish_attributes(invocation.span, invocation)
-        _apply_error_attributes(span, error)
+        _apply_llm_finish_attributes(invocation.span, invocation)
+        _apply_error_attributes(invocation.span, error)
         error_type = getattr(error.type, "__qualname__", None)
         self._record_llm_metrics(invocation, span, error_type=error_type)
+        _maybe_emit_llm_event(self._logger, span, invocation, error)
         # Detach context and end span
         otel_context.detach(invocation.context_token)
         span.end()

@@ -201,6 +212,7 @@ def llm(
 def get_telemetry_handler(
     tracer_provider: TracerProvider | None = None,
     meter_provider: MeterProvider | None = None,
+    logger_provider: LoggerProvider | None = None,
 ) -> TelemetryHandler:
     """
     Returns a singleton TelemetryHandler instance.

@@ -212,6 +224,7 @@ def get_telemetry_handler(
     handler = TelemetryHandler(
         tracer_provider=tracer_provider,
         meter_provider=meter_provider,
+        logger_provider=logger_provider,
     )
     setattr(get_telemetry_handler, "_default_handler", handler)
     return handler
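The `setattr(get_telemetry_handler, "_default_handler", handler)` line above caches the singleton as an attribute on the function itself. A minimal self-contained sketch of that pattern follows; the `TelemetryHandler` body is simplified to plain attribute storage and is not the package's real class, and the `getattr` cache check is inferred from the docstring's "singleton" wording.

```python
# Minimal sketch of the function-attribute singleton pattern used by
# get_telemetry_handler in the diff above. The handler class is a
# simplified stand-in, not the real implementation.
class TelemetryHandler:
    def __init__(self, tracer_provider=None, meter_provider=None, logger_provider=None):
        self.tracer_provider = tracer_provider
        self.meter_provider = meter_provider
        self.logger_provider = logger_provider


def get_telemetry_handler(
    tracer_provider=None, meter_provider=None, logger_provider=None
) -> TelemetryHandler:
    """Return a singleton TelemetryHandler, creating it on first call."""
    # The singleton is cached as an attribute on the function object,
    # so later calls (even with different arguments) reuse the first handler.
    handler = getattr(get_telemetry_handler, "_default_handler", None)
    if handler is None:
        handler = TelemetryHandler(
            tracer_provider=tracer_provider,
            meter_provider=meter_provider,
            logger_provider=logger_provider,
        )
        setattr(get_telemetry_handler, "_default_handler", handler)
    return handler
```

One consequence of this design, visible in the sketch, is that providers passed on any call after the first are ignored: callers who need a specific `logger_provider` must supply it on the first call.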
