Setup Tracing: Python

How to configure OpenTelemetry and connect to the Phoenix server

Phoenix uses OTLP (the OpenTelemetry Protocol) to receive traces from your application. To make this process as simple as possible, we've created a Python package called arize-phoenix-otel.

Note that you do not need to use arize-phoenix-otel to set up OpenTelemetry. If you would like to use pure OpenTelemetry, see Using OTEL Python Directly.

Install the arize-phoenix-otel Python package. It may already be installed.

pip install arize-phoenix-otel

If you have specified endpoints, headers, and project names as environment variables, setting up OTEL can be as simple as:

from phoenix.otel import register

# Configuration is picked up from your environment variables
tracer_provider = register()

# Initialize Instrumentors and pass in the tracer_provider
# e.g. OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
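
For reference, here is a minimal sketch of the environment-driven setup. PHOENIX_PROJECT_NAME is documented later on this page; PHOENIX_COLLECTOR_ENDPOINT is assumed to be the base URL of your Phoenix server (for other variables, such as API keys, consult the Phoenix environment-variable docs):

import os

# Point register() at a local Phoenix server and name the project.
# These must be set before register() is called.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
os.environ["PHOENIX_PROJECT_NAME"] = "my-llm-app"

from phoenix.otel import register

tracer_provider = register()  # picks up the variables above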

And setup is done! You are ready to set up integrations and instrumentation. Read further for more advanced configuration options.

Setup Endpoints, Projects, etc.

By default, register picks up your configuration from environment variables, but you can configure it using arguments as well:

from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    endpoint="http:/localhost:4317"  # or http at "http://localhost:6006/v1/traces"
    headers={"api_key": "<your-api-key>"}, # E.x. credentials for app.phoenix.arize.com
)

When using the endpoint argument, you must pass in the fully qualified OTel endpoint. Phoenix provides two endpoints (registering against each is sketched after the list):

  • gRPC: more performant

    • by default exposed on port 4317: <PHOENIX_HOST>:4317

  • HTTP: simpler

    • by default exposed on port 6006 and /v1/traces: <PHOENIX_HOST>:6006/v1/traces
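
As a sketch, assuming a Phoenix server running on localhost, registering against each endpoint looks like this (pick one):

from phoenix.otel import register

# gRPC endpoint (port 4317)
tracer_provider = register(endpoint="http://localhost:4317")

# HTTP endpoint (port 6006, path /v1/traces)
# tracer_provider = register(endpoint="http://localhost:6006/v1/traces")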

phoenix.otel can be further configured for things like batch span processing and specifying resources. For the full details of how to configure phoenix.otel, please consult the package repository (https://github.com/Arize-ai/phoenix/tree/main/packages/phoenix-otel)
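
For example, recent versions of arize-phoenix-otel let you opt into batch span processing directly from register (a sketch; confirm the parameter name against the repository for your installed version):

from phoenix.otel import register

# batch=True is assumed to swap the default simple (synchronous) span
# processor for a batch processor, which exports spans in the background
# and is generally preferred in production.
tracer_provider = register(
    project_name="my-llm-app",
    batch=True,
)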

Log to a specific project

Phoenix uses projects to group traces. If left unspecified, all traces are sent to a default project.

In a notebook, you can set the PHOENIX_PROJECT_NAME environment variable before adding instrumentation or running any of your code.

In Python, this would look like:

import os

os.environ['PHOENIX_PROJECT_NAME'] = "<your-project-name>"

Note that setting a project via an environment variable only works in a notebook and must be done BEFORE instrumentation is initialized. If you are using OpenInference Instrumentation, see the Server tab for how to set the project name in the Resource attributes.

Alternatively, you can set the project name in your register function call:

from phoenix.otel import register

tracer_provider = register(
    project_name="my-project-name",
    # ... other configuration options
)

Projects work by setting resource attributes on your spans (as in the OTEL example above); the Phoenix server uses the project name resource attribute to group traces into the appropriate project. A sketch of how this looks in plain OpenTelemetry follows.
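
This sketch uses the OpenTelemetry SDK and OpenInference semantic conventions directly; ResourceAttributes.PROJECT_NAME comes from the openinference-semantic-conventions package, and the endpoint assumes a local Phoenix server:

from openinference.semconv.resource import ResourceAttributes
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# The project name rides along as a resource attribute on every span.
resource = Resource(attributes={ResourceAttributes.PROJECT_NAME: "my-llm-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)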

Switching projects in a notebook

Typically you want traces for an LLM app to all be grouped in one project. However, while working with Phoenix inside a notebook, we provide a utility to temporarily associate spans with different projects. You can use this to trace things like evaluations.

from phoenix.trace import using_project

# Switch project to run evals
with using_project("my-eval-project"):
    # All spans created within this context will be associated with
    # the "my-eval-project" project.
    ...  # run evaluations here
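
Once the block exits, new spans revert to the project configured at registration.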

How to turn off tracing

Tracing can be paused temporarily or disabled permanently.

Pause tracing using a context manager

If there is a section of your code for which tracing is not desired, e.g. the document chunking process, it can be put inside the suppress_tracing context manager as shown below.

from phoenix.trace import suppress_tracing

with suppress_tracing():
    # Code running inside this block doesn't generate traces.
    # For example, running LLM evals here won't generate additional traces.
    ...
# Tracing will resume outside the block.
...

Uninstrument the auto-instrumentors permanently

Calling .uninstrument() on the auto-instrumentors removes tracing permanently. Below are examples for LangChain, LlamaIndex, and OpenAI, respectively.

from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

LangChainInstrumentor().uninstrument()
LlamaIndexInstrumentor().uninstrument()
OpenAIInstrumentor().uninstrument()
# etc.
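
To resume tracing later, you can typically call .instrument(tracer_provider=...) on the instrumentor again.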
