Customize Tracing

Using context to customize spans

To customize spans created via auto-instrumentation, you can use the OTel Context to set attributes on spans created during a block of code (think child spans, or spans under that block of code). Our OpenInference packages offer convenient tools to read from and write to the OTel Context. The benefit of this approach is that OpenInference auto-instrumentors will propagate these attributes to all spans underneath a parent trace.

Supported Context Attributes include:

  • Session ID*: Unique identifier for a session.

  • User ID*: Unique identifier for a user.

  • Metadata*: Metadata associated with a span.

  • Tags: A list of tags to categorize the span.

  • Prompt Template

    • Template: The template used to generate prompts, as a Python f-string.

    • Version: The version of the prompt template.

    • Variables: The key-value pairs applied to the prompt template.

*UI support for sessions, users, and metadata is coming in an upcoming Phoenix release (https://github.com/Arize-ai/phoenix/issues/2619).

Install Core Instrumentation Package

Install the core instrumentation package:

pip install openinference-instrumentation

Specifying a Session

We provide a using_session context manager to add a session ID to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the session ID as a span attribute, following the OpenInference semantic conventions. Its input, the session ID, must be a non-empty string.

from openinference.instrumentation import using_session

with using_session(session_id="my-session-id"):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

It can also be used as a decorator:

@using_session(session_id="my-session-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

Specifying Users

We provide a using_user context manager to add a user ID to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the user ID as a span attribute, following the OpenInference semantic conventions. Its input, the user ID, must be a non-empty string.

from openinference.instrumentation import using_user
with using_user("my-user-id"):
    # Calls within this block will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...

It can also be used as a decorator:

@using_user("my-user-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...

Specifying Metadata

We provide a using_metadata context manager to add metadata to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the metadata as a span attribute, following the OpenInference semantic conventions. Its input, the metadata, must be a dictionary with string keys. The dictionary is serialized to JSON when saved to the OTel Context and remains a JSON string when sent as a span attribute.

from openinference.instrumentation import using_metadata
metadata = {
    "key-1": "value-1",
    "key-2": "value-2",
}
with using_metadata(metadata):
    # Calls within this block will generate spans with the attributes:
    # "metadata" = "{\"key-1\": \"value-1\", \"key-2\": \"value-2\"}" # JSON serialized
    ...

It can also be used as a decorator:

@using_metadata(metadata)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    ...

Specifying Tags

We provide a using_tags context manager to add tags to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass the tags as a span attribute, following the OpenInference semantic conventions. Its input, the tag list, must be a list of strings.

from openinference.instrumentation import using_tags
tags = ["tag_1", "tag_2"]
with using_tags(tags):
    # Calls within this block will generate spans with the attributes:
    # "tag.tags" = ["tag_1", "tag_2"]
    ...

It can also be used as a decorator:

@using_tags(tags)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "tag.tags" = "["tag_1","tag_2",...]"
    ...
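
Specifying the Prompt Template

We provide a using_prompt_template context manager to add a prompt template, its version, and its variables to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass these fields as span attributes, following the OpenInference semantic conventions. The sketch below reuses the template, version, and variables from the combined example in the next section:

from openinference.instrumentation import using_prompt_template

prompt_template = "Please describe the weather forecast for {city} on {date}"
with using_prompt_template(
    template=prompt_template,
    version="v1.0",
    variables={"city": "Johannesburg", "date": "July 11"},
):
    # Calls within this block will generate spans with the attributes:
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.version" = "v1.0"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    ...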

Customizing Attributes

We provide a using_attributes context manager to add attributes to the current OpenTelemetry Context. OpenInference auto-instrumentors will read this Context and pass these fields as span attributes, following the OpenInference semantic conventions. This context manager is convenient if you find yourself using several of the previous ones in conjunction.

from openinference.instrumentation import using_attributes
tags = ["tag_1", "tag_2"]
metadata = {
    "key-1": "value-1",
    "key-2": "value-2",
}
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
prompt_template_version = "v1.0"
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

The previous example is equivalent to the following, which makes using_attributes a convenient shorthand for the more complex settings.

from openinference.instrumentation import using_prompt_template

# Parenthesized groups of context managers require Python 3.10+.
with (
    using_session("my-session-id"),
    using_user("my-user-id"),
    using_metadata(metadata),
    using_tags(tags),
    using_prompt_template(
        template=prompt_template,
        version=prompt_template_version,
        variables=prompt_template_variables,
    ),
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

It can also be used as a decorator:

@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...
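
These context attributes are only attached automatically to spans created by OpenInference auto-instrumentors. If you create spans manually with an OpenTelemetry tracer, you can copy them onto the span yourself; a minimal sketch, assuming the get_attributes_from_context helper exported by openinference-instrumentation:

from openinference.instrumentation import get_attributes_from_context, using_session
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with using_session(session_id="my-session-id"):
    with tracer.start_as_current_span("my-manual-span") as span:
        # Read the OpenInference attributes stored in the current Context
        # and set them on the manually created span.
        span.set_attributes(dict(get_attributes_from_context()))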

Span Processing

The tutorials and code snippets in these docs default to the SimpleSpanProcessor. A SimpleSpanProcessor processes and exports spans one at a time, as soon as each span ends: if you create 5 spans, each is processed and exported before the next span is created in code. This can be helpful when you do not want to risk losing a batch, or when you're experimenting with OpenTelemetry in development. However, it also comes with potentially significant overhead, especially if spans are exported over a network: each time a span is created, it is processed and sent over the network before your app's execution can continue.
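
As a sketch of what that wiring looks like with the OpenTelemetry SDK (the collector endpoint here is an assumption; point the exporter at your Phoenix or collector instance):

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

tracer_provider = TracerProvider()
# Each span is exported synchronously, as soon as it ends.
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(tracer_provider)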

The BatchSpanProcessor processes spans in batches before they are exported. This is usually the right processor to use for an application in production, but it does mean spans may take some time to show up in Phoenix.
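
Swapping in the batch processor is a one-line change to the sketch above; spans are then queued and exported in batches on a background thread:

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Reusing the tracer_provider from the sketch above.
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)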

In short, we recommend the BatchSpanProcessor in production and the SimpleSpanProcessor in development.
