Customize Tracing
Using context to customize spans
To customize spans that are created via auto-instrumentation, the OTel Context can be used to set attributes on the spans created during a block of code (think child spans, or spans created under that block of code). Our OpenInference packages offer convenient tools to write to and read from the OTel Context. The benefit of this approach is that OpenInference auto-instrumentors will propagate (i.e., inherit) these attributes to all spans underneath a parent trace.
Supported context attributes include:
Session ID*: Unique identifier for a session.
User ID*: Unique identifier for a user.
Metadata*: Metadata associated with a span.
Tags: List of tags to give the span a category.
Prompt Template:
Template: The template used to generate prompts, written as a Python f-string.
Version: The version of the prompt template.
Variables: Key-value pairs applied to the prompt template.
*UI support for session, user, and metadata is coming in an upcoming Phoenix release (https://github.com/Arize-ai/phoenix/issues/2619).
Install Core Instrumentation Package
Install the core instrumentation package:
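A minimal install sketch, assuming the core package is published on PyPI as openinference-instrumentation:

```shell
pip install openinference-instrumentation
```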
Specifying a session
Sessions are not currently supported in Phoenix and are only supported via the Arize OTel collector. Support for sessions in Phoenix is coming in an upcoming release.
We provide a using_session context manager to add a session ID to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the session ID as a span attribute, following the OpenInference semantic conventions. Its input, the session ID, must be a non-empty string.
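A minimal sketch, assuming using_session is importable from openinference.instrumentation and that the session ID lands on spans as the session.id attribute:

```python
from openinference.instrumentation import using_session

with using_session(session_id="my-session-id"):
    # Spans created by auto-instrumented code inside this block are
    # expected to carry the session ID, e.g. "session.id" = "my-session-id".
    ...
```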
It can also be used as a decorator:
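For example, a sketch under the same import assumption:

```python
from openinference.instrumentation import using_session

@using_session(session_id="my-session-id")
def call_llm(*args, **kwargs):
    # Spans created while this function runs are expected to carry
    # "session.id" = "my-session-id".
    ...
```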
Specifying users
We provide a using_user context manager to add a user ID to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the user ID as a span attribute, following the OpenInference semantic conventions. Its input, the user ID, must be a non-empty string.
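A minimal sketch, assuming the same import path and that the user ID is recorded as the user.id attribute:

```python
from openinference.instrumentation import using_user

with using_user(user_id="my-user-id"):
    # Spans created inside this block are expected to carry
    # "user.id" = "my-user-id".
    ...
```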
It can also be used as a decorator:
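For example, under the same assumptions:

```python
from openinference.instrumentation import using_user

@using_user(user_id="my-user-id")
def call_llm(*args, **kwargs):
    # Spans created while this function runs are expected to carry
    # "user.id" = "my-user-id".
    ...
```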
Specifying Metadata
We provide a using_metadata context manager to add metadata to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the metadata as a span attribute, following the OpenInference semantic conventions. Its input, the metadata, must be a dictionary with string keys. This dictionary will be serialized to JSON when saved to the OTel Context and will remain a JSON string when sent as a span attribute.
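A minimal sketch, assuming the same import path and that the serialized dictionary lands on spans under a metadata attribute:

```python
from openinference.instrumentation import using_metadata

metadata = {"key-1": "value-1", "key-2": "value-2"}
with using_metadata(metadata):
    # Spans created inside this block are expected to carry the metadata
    # serialized as a JSON string span attribute.
    ...
```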
It can also be used as a decorator:
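For example, under the same assumptions:

```python
from openinference.instrumentation import using_metadata

@using_metadata({"key-1": "value-1", "key-2": "value-2"})
def call_llm(*args, **kwargs):
    # Spans created while this function runs are expected to carry
    # the metadata as a JSON string span attribute.
    ...
```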
Specifying Tags
We provide a using_tags context manager to add tags to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the tags as a span attribute, following the OpenInference semantic conventions. Its input, the tag list, must be a list of strings.
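A minimal sketch, assuming the same import path and that the tags land on spans under a tag.tags attribute:

```python
from openinference.instrumentation import using_tags

with using_tags(["tag-1", "tag-2"]):
    # Spans created inside this block are expected to carry the tag list,
    # e.g. "tag.tags" = ["tag-1", "tag-2"].
    ...
```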
It can also be used as a decorator:
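For example, under the same assumptions:

```python
from openinference.instrumentation import using_tags

@using_tags(["tag-1", "tag-2"])
def call_llm(*args, **kwargs):
    # Spans created while this function runs are expected to carry the tag list.
    ...
```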
Customizing Attributes
We provide a using_attributes context manager to add attributes to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the attribute fields as span attributes, following the OpenInference semantic conventions. This is a convenient context manager to use if you find yourself using many of the previous ones in conjunction.
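A sketch of setting several fields at once; the keyword argument names (session_id, user_id, metadata, tags, prompt_template, prompt_template_version, prompt_template_variables) are assumptions based on the attribute list above:

```python
from openinference.instrumentation import using_attributes

with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata={"key-1": "value-1", "key-2": "value-2"},
    tags=["tag-1", "tag-2"],
    prompt_template="Describe the weather in {city} on {date}.",
    prompt_template_version="v1.0",
    prompt_template_variables={"city": "Johannesburg", "date": "July 11"},
):
    # Spans created inside this block are expected to carry all of the
    # fields above as OpenInference span attributes.
    ...
```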
The previous example is equivalent to the following, making using_attributes a very convenient tool for more complex settings.
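A sketch of the equivalent nested form using the individual context managers (the prompt template fields are omitted here for brevity):

```python
from openinference.instrumentation import (
    using_metadata,
    using_session,
    using_tags,
    using_user,
)

with using_session("my-session-id"):
    with using_user("my-user-id"):
        with using_metadata({"key-1": "value-1", "key-2": "value-2"}):
            with using_tags(["tag-1", "tag-2"]):
                # Same effect as the single using_attributes block above,
                # minus the prompt template fields.
                ...
```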
It can also be used as a decorator:
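For example, under the same assumptions about the keyword argument names:

```python
from openinference.instrumentation import using_attributes

@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    tags=["tag-1", "tag-2"],
)
def call_llm(*args, **kwargs):
    # Spans created while this function runs are expected to carry
    # all of the fields above as span attributes.
    ...
```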
Span Processing
The tutorials and code snippets in these docs default to the SimpleSpanProcessor.
A SimpleSpanProcessor processes and exports spans as they are created. This means that if you create 5 spans, each will be processed and exported before the next span is created in code. This can be helpful in scenarios where you do not want to risk losing a batch, or if you're experimenting with OpenTelemetry in development. However, it also comes with potentially significant overhead, especially if spans are being exported over a network: each time a span is created, it is processed and sent over the network before your app's execution can continue.
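A minimal sketch of wiring a SimpleSpanProcessor to an OpenTelemetry tracer provider; the OTLP/HTTP endpoint shown is an assumption (e.g., a local Phoenix collector):

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

tracer_provider = TracerProvider()
# Each span is exported synchronously as soon as it ends.
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(tracer_provider)
```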
The BatchSpanProcessor processes spans in batches before they are exported. This is usually the right processor to use for an application in production, but it does mean spans may take some time to show up in Phoenix.
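The batched setup is a one-line change in the same sketch, again assuming an OTLP/HTTP endpoint:

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

tracer_provider = TracerProvider()
# Spans are queued and exported in batches on a background thread.
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(tracer_provider)
```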
We recommend the BatchSpanProcessor when deployed to production and the SimpleSpanProcessor when developing.