Quickstart: Tracing (Python)

Overview

Phoenix supports three main options to collect traces:

  1. Use Phoenix's decorators to mark functions and code blocks.

  2. Use automatic instrumentation to capture all calls made to supported frameworks.

  3. Use base OpenTelemetry instrumentation. Supported in Python and TS / JS, among many other languages.

This example uses options 1 and 2.

Launch Phoenix

Using Phoenix Cloud

  1. Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login.

  2. Grab your API key from the Keys option on the left bar.

  3. In your code, set your endpoint and API key:

import os

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
PHOENIX_ENDPOINT = "https://app.phoenix.arize.com/v1/traces"

os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"

Using Self-hosted Phoenix

Run Phoenix using Docker, local terminal, Kubernetes, etc. For more information, see self-hosting.

  1. In your code, set your endpoint:

import os

# Update this with your self-hosted endpoint
PHOENIX_ENDPOINT = "http://0.0.0.0:6006/v1/traces"
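
If you don't yet have a Phoenix instance running, one quick way to get one for local experimentation is to launch it in-process (a minimal sketch, assuming the arize-phoenix package is installed; production self-hosting would use Docker or Kubernetes instead):

import phoenix as px

# Start a local Phoenix server in this process. The UI and the
# trace collector are served on http://localhost:6006 by default.
session = px.launch_app()
print(session.url)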

Connect to Phoenix

To collect traces from your application, you must configure an OpenTelemetry TracerProvider to send traces to Phoenix.

pip install arize-phoenix-otel

from phoenix.otel import register

# configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
  auto_instrument=True, # See 'Trace all calls made to a library' below
  endpoint=PHOENIX_ENDPOINT,
)
tracer = tracer_provider.get_tracer(__name__)

Trace your own functions

Functions can be traced using decorators:

@tracer.chain
def my_func(input: str) -> str:
    return "output"

Input and output attributes are set automatically based on my_func's parameters and return value.
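
The decorated function is then called as usual, and each call produces a span in the configured project. For finer-grained control, the same tracer also exposes the standard OpenTelemetry span API (a minimal sketch; the span name and attribute key below are illustrative):

result = my_func("hello")  # recorded as a span automatically

# Manual span for code you don't want to wrap in a decorator
with tracer.start_as_current_span("preprocess-input") as span:
    span.set_attribute("input.length", len("hello"))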

Trace all calls made to a library

Phoenix can also capture all calls made to supported libraries automatically. Just install the respective OpenInference library:

pip install openinference-instrumentation-openai

OpenInference libraries must be installed before calling the register function.

# Add OpenAI API Key
import os
import openai

os.environ["OPENAI_API_KEY"] = "ADD YOUR OPENAI API KEY"

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku."}],
)
print(response.choices[0].message.content)
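
Because auto_instrument=True was passed to register and the OpenAI instrumentor is installed, the call above is traced with no further code. Decorators and auto-instrumentation also compose: an LLM call made inside a decorated function shows up as a child span (a minimal sketch; write_haiku is an illustrative name):

@tracer.chain
def write_haiku(topic: str) -> str:
    # This OpenAI call is captured by the OpenInference instrumentation
    # and nested under the write_haiku chain span.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Write a haiku about {topic}."}],
    )
    return response.choices[0].message.content

print(write_haiku("tracing"))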

View your Traces in Phoenix

You should now see traces in Phoenix!

Next Steps

  • Explore tracing integrations
  • Customize tracing
  • View use cases to see end-to-end examples