import os
# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
Your Phoenix API key can be found in the Keys section of your Phoenix Cloud dashboard.
Launch your local Phoenix instance:
pip install arize-phoenix
phoenix serve
For details on customizing a local terminal deployment, see the Phoenix terminal setup documentation.
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint:
import os
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
See the Phoenix configuration documentation for more details.
Pull the latest Phoenix image from Docker Hub:
docker pull arizephoenix/phoenix:latest
Run your containerized instance:
docker run -p 6006:6006 arizephoenix/phoenix:latest
This will expose Phoenix on localhost:6006.
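Traces in this containerized instance live inside the container by default. If you want them to survive restarts, one option is to mount a host volume; a sketch, assuming the PHOENIX_WORKING_DIR environment variable points Phoenix at its data directory:
# Sketch: persist Phoenix data on the host (PHOENIX_WORKING_DIR is assumed to
# control where Phoenix stores its data)
docker run -p 6006:6006 \
  -v $(pwd)/phoenix-data:/mnt/data \
  -e PHOENIX_WORKING_DIR=/mnt/data \
  arizephoenix/phoenix:latest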
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint:
import os
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
Be sure to also install the OpenInference instrumentation library for the underlying model you're using alongside Instructor. For example, if you're calling OpenAI directly, you would also add: openinference-instrumentation-openai
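For the OpenAI-based example below, that would mean installing both instrumentors (a sketch; package names as published by the OpenInference project):
pip install openinference-instrumentation-instructor openinference-instrumentation-openai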
Setup
Connect to your Phoenix instance using the register function.
from phoenix.otel import register

# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
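If you prefer explicit instrumentation over auto_instrument, you can attach the instrumentors yourself. A minimal sketch, assuming the openinference-instrumentation-instructor and openinference-instrumentation-openai packages are installed:
from phoenix.otel import register
from openinference.instrumentation.instructor import InstructorInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Register a tracer provider without auto-instrumentation
tracer_provider = register(project_name="my-llm-app")

# Attach the Instructor and OpenAI instrumentors by hand
InstructorInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)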
Run Instructor
From here you can use instructor as normal.
import instructor
from pydantic import BaseModel
from openai import OpenAI


# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int


# Patch the OpenAI client
client = instructor.from_openai(OpenAI())

# Extract structured data from natural language
user_info = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
#> John Doe
print(user_info.age)
#> 30
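Spans from Instructor and OpenAI can also be grouped under your own parent span. A small sketch, continuing from the register and Instructor snippets above (the span name extract-user-info is just illustrative):
# Create a tracer from the provider returned by register()
tracer = tracer_provider.get_tracer(__name__)

# Nest the extraction under a custom parent span so the Instructor call and
# the underlying OpenAI request appear together in Phoenix
with tracer.start_as_current_span("extract-user-info"):
    user_info = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserInfo,
        messages=[{"role": "user", "content": "John Doe is 30 years old."}],
    )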
Observe
Now that you have tracing set up, all invocations of your underlying model (completions, chat completions, embeddings) and Instructor calls will be streamed to your running Phoenix instance for observability and evaluation.
Resources
For more info on using Phoenix with Docker, see the Phoenix Docker deployment documentation.
By default, notebook instances do not have persistent storage, so your traces will disappear when the notebook is closed. See the Phoenix persistence documentation or use one of the other deployment options to retain traces.
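If you still want to experiment from a notebook, Phoenix can be launched in-process; a minimal sketch using the phoenix package:
import phoenix as px

# Launch an in-process Phoenix app; the returned session exposes the UI URL
session = px.launch_app()
print(session.url)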