Instrument: Python
While the spans created by Phoenix and OpenInference provide a solid foundation for tracing your application, sometimes you need to create and customize your LLM spans yourself.
Phoenix and OpenInference use the OpenTelemetry Trace API to create spans. Because Phoenix supports OpenTelemetry, you can perform manual instrumentation, no LLM framework required! This guide will help you understand how to create and customize spans using the OpenTelemetry Trace API.
See here for an end-to-end example of a manually instrumented application.
First, ensure you have the API and SDK packages:
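For example, installing from PyPI (the OTLP exporter package is only needed if you'll export spans over OTLP to a collector such as Phoenix):

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```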
Let's next install the OpenInference Semantic Conventions package so that we can construct spans with LLM semantic conventions:
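Assuming you're installing from PyPI:

```shell
pip install openinference-semantic-conventions
```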
For full documentation on the OpenInference semantic conventions, please consult the specification.
Configuring an OTel tracer involves some boilerplate code that the instrumentors in phoenix.trace take care of for you. If you're manually instrumenting your application, you'll need to implement this boilerplate yourself:
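A minimal sketch of that boilerplate is shown below. The project-name resource attribute and the collector endpoint (a Phoenix instance assumed to be running locally on port 6006) are assumptions you should adapt to your own setup:

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# A resource describes the origin of your spans; here it names the project.
resource = Resource(attributes={"openinference.project.name": "my-llm-app"})

# The tracer provider holds the resource and the span processors.
tracer_provider = TracerProvider(resource=resource)

# A span processor forwards finished spans to an exporter; the exporter sends
# them to the collector (assumed here to be Phoenix at localhost:6006).
exporter = OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces")
tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)

# The tracer is the handle used to create spans in application code.
tracer = trace.get_tracer(__name__)
```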
This snippet contains a few OTel concepts:
A resource represents an origin (e.g., a particular service, or in this case, a project) from which your spans are emitted.
Span processors filter, batch, and perform operations on your spans prior to export.
Your tracer provides a handle for you to create spans and add attributes in your application code.
The collector (e.g., Phoenix) receives the spans exported by your application.
To create a span, you'll typically want it to be started as the current span.
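A minimal example using the tracer configured above:

```python
def do_work():
    with tracer.start_as_current_span("do-work") as span:
        # any spans created inside this block become children of 'span'
        print("doing some work...")
        # the span is ended automatically when the 'with' block exits
```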
You can also use start_span to create a span without making it the current span. This is usually done to track concurrent or asynchronous operations.
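For example (note that a span started this way must be ended explicitly):

```python
def do_work():
    span = tracer.start_span("background-work")  # not set as the current span
    print("doing some work...")
    span.end()  # must be ended manually
```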
If you have a distinct sub-operation you'd like to track as a part of another one, you can create nested spans to represent the relationship:
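A sketch of a parent span with a nested child span:

```python
def do_work():
    with tracer.start_as_current_span("parent") as parent:
        # work tracked by 'parent'
        print("doing some work...")
        with tracer.start_as_current_span("child") as child:
            # work tracked by 'child', which is automatically nested under 'parent'
            print("doing some nested work...")
```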
When you view spans in a trace visualization tool, child will be tracked as a nested span under parent.
It's common to have a single span track the execution of an entire function. In that scenario, there is a decorator you can use to reduce code:
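For example, using the decorator form of start_as_current_span:

```python
@tracer.start_as_current_span("do_work")
def do_work():
    # a span named "do_work" is started before the function body runs
    # and ended when the function returns
    print("doing some work...")
```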
Use of the decorator is equivalent to creating the span inside do_work() and ending it when do_work() is finished.
To use the decorator, you must have a tracer instance in scope for your function declaration.
If you need to add attributes or events, it's less convenient to use a decorator.
Sometimes it's helpful to access whatever the current span is at a point in time so that you can enrich it with more information.
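You can fetch the current span from the OpenTelemetry context:

```python
from opentelemetry import trace

current_span = trace.get_current_span()
# enrich 'current_span' with additional information here
```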
Attributes let you attach key/value pairs to a span so that it carries more information about the current operation that it's tracking.
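For example, attaching a few attributes to the current span:

```python
current_span = trace.get_current_span()

current_span.set_attribute("operation.value", 1)
current_span.set_attribute("operation.name", "Saying hello!")
current_span.set_attribute("operation.other-stuff", [1, 2, 3])
```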
Notice above that the attributes have a specific prefix, operation. When adding custom attributes, it's best practice to vendor your attributes (e.g. mycompany.) so that they do not clash with semantic conventions.
Semantic attributes are pre-defined attributes that are well-known naming conventions for common kinds of data. Using semantic attributes lets you normalize this kind of information across your systems. In the case of Phoenix, the OpenInference Semantic Conventions package provides a set of well-known attributes that are used to represent LLM application specific semantic conventions.
To use OpenInference Semantic Attributes in Python, ensure you have the semantic conventions package:
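As in the setup section, this is typically installed from PyPI:

```shell
pip install openinference-semantic-conventions
```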
Then you can use it in code:
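A sketch using a few of the OpenInference attributes; the exact attributes to set depend on your application, so consult the specification for the full list:

```python
from openinference.semconv.trace import SpanAttributes

with tracer.start_as_current_span("llm-call") as span:
    span.set_attribute(SpanAttributes.OPENINFERENCE_SPAN_KIND, "LLM")
    span.set_attribute(SpanAttributes.INPUT_VALUE, "Why is the sky blue?")
    # ... call your LLM here ...
    span.set_attribute(SpanAttributes.OUTPUT_VALUE, "Rayleigh scattering.")
```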
Events are human-readable messages that represent "something happening" at a particular moment during the lifetime of a span. You can think of it as a primitive log.
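For example, marking a couple of moments during the span's lifetime:

```python
current_span = trace.get_current_span()

current_span.add_event("Gonna try it!")
# ... do the work ...
current_span.add_event("Did it!")
```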
The span status allows you to signal the success or failure of the code executed within the span.
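For example, marking the current span as failed:

```python
from opentelemetry.trace import Status, StatusCode

current_span = trace.get_current_span()
current_span.set_status(Status(StatusCode.ERROR))
```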
It can be a good idea to record exceptions when they happen. It’s recommended to do this in conjunction with setting span status.
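A sketch combining the two (do_risky_work is a hypothetical function that may raise):

```python
from opentelemetry.trace import Status, StatusCode

current_span = trace.get_current_span()
try:
    do_risky_work()  # hypothetical operation that may raise
except Exception as ex:
    # mark the span as failed and attach the exception details to it
    current_span.set_status(Status(StatusCode.ERROR))
    current_span.record_exception(ex)
```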