Customize Traces

Oftentimes you will want to customize various aspects of the traces you log to Phoenix.

Log to a specific project

Phoenix uses projects to group traces. If left unspecified, all traces are sent to a default project.

In the notebook, you can set the PHOENIX_PROJECT_NAME environment variable before adding instrumentation or running any of your code.

In Python, this looks like:

import os

os.environ['PHOENIX_PROJECT_NAME'] = "<your-project-name>"

Note that setting a project via an environment variable only works in a notebook and must be done BEFORE instrumentation is initialized. If you are using OpenInference instrumentation in a deployed application, set the project name in the Resource attributes instead (see below).

Projects work by setting the project name in the Resource attributes attached to your traces. The Phoenix server uses the project name attribute to group traces into the appropriate project.
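If you configure the OpenTelemetry tracer provider yourself, you can attach the project name as a Resource attribute. Below is a minimal sketch, assuming a Phoenix server running locally at http://localhost:6006 and the openinference-semantic-conventions and OpenTelemetry SDK packages; the project name "my-llm-app" is a placeholder.

from openinference.semconv.resource import ResourceAttributes
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Attach the project name as a Resource attribute so the Phoenix server
# groups the resulting traces under that project.
resource = Resource(attributes={ResourceAttributes.PROJECT_NAME: "my-llm-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
# Pass tracer_provider to your OpenInference instrumentor when instrumenting.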

Switching projects in a notebook

Typically you want all traces for an LLM app grouped in one project. However, when working inside a notebook, Phoenix provides a utility to temporarily associate spans with a different project. You can use this to trace things like evaluations.

from phoenix.trace import using_project

# Switch project to run evals
with using_project("my-eval-project"):
    # All spans created within this context will be associated with
    # the "my-eval-project" project.
    ...  # run your evaluations here

Adding custom metadata to spans

Spans produced by auto-instrumentation can get you very far. However, at some point you may want to track additional metadata, such as account or user information.

With LangChain, you can provide metadata directly on the chain or on an invocation of the chain.

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Define a prompt with an {adjective} input to match the invocation below
prompt = PromptTemplate.from_template("Tell me a {adjective} joke")

# Pass metadata into the chain
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})

# Pass metadata into the invocation
completion = llm.predict(adjective="funny", metadata={"variant": "funny"})
print(completion)
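
If you call the chain through LangChain's runnable interface, metadata can also be supplied via the config argument. A minimal sketch, assuming the same chain as above:

# Pass metadata through the runnable config
result = llm.invoke(
    {"adjective": "funny"},
    config={"metadata": {"variant": "funny"}},
)
print(result["text"])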
