
© 2025 Arize AI


How-to: Tracing

Guides on how to use traces


Last updated 16 days ago


Setup Tracing

  • Setup Tracing in Python or TypeScript
  • Add Integrations via Auto Instrumentation
  • Manually Instrument your application

Customize Traces & Spans

How to set custom attributes and semantic attributes on child spans and on spans created by auto-instrumentors:

  • How to track sessions
  • How to create custom spans
  • Setting metadata
  • Setting tags
  • Setting a user
  • Setting prompt template attributes
  • Masking attributes on spans

Auto Instrumentation

Phoenix natively works with a variety of frameworks and SDKs across Python and JavaScript via OpenTelemetry auto-instrumentation. Phoenix can also be natively integrated with AI platforms such as LangFlow and LiteLLM proxy.

  • Auto Instrumentation: Python
  • Auto Instrumentation: TypeScript

Manual Instrumentation

Create and customize spans for your use case, in Python (using OpenInference Helpers or base OTEL) or in TypeScript (Setup Tracing (TS)):

  • How to acquire a Tracer
  • How to create spans
  • How to create nested spans
  • How to create spans with decorators
  • How to get the current span
  • How to add attributes to a span
  • How to add semantic attributes
  • How to add events
  • How to set a span's status
  • How to record exceptions
  • How to read attributes from context

Querying Spans

How to query spans to construct DataFrames to use for evaluation:

  • How to run a query
  • How to specify a project
  • How to query for documents
  • How to apply filters
  • How to extract attributes
  • How to use data for evaluation
  • How to use pre-defined queries

Annotate Traces

  • Annotating in the UI
  • Annotating via the Client

Log Evaluation Results

How to log evaluation results to annotate traces with evals:

  • How to log span evaluations
  • How to log document evaluations
  • How to specify a project for logging evaluations

Save and Load Traces

  • Saving Traces
  • Loading Traces