Use Example Inferences

Quickly explore Phoenix with concrete examples

Phoenix ships with a collection of example inferences so you can quickly try out the app on concrete use cases. This guide shows you how to download example inferences, inspect their contents, and launch the app with them.
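
The snippets below assume Phoenix is installed and imported under its conventional alias. A minimal setup, run once at the top of your notebook, looks like this:

# Install first if needed: pip install arize-phoenix
import phoenix as px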

View Available Inferences

To see a list of inferences available for download, run

px.load_example?

This displays the docstring for the phoenix.load_example function, which contains a list of the inferences available for download.
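
The trailing ? is IPython/Jupyter syntax; in a plain Python session, the built-in help function prints the same docstring:

help(px.load_example)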

Download Your Inference Set of Choice

Choose the name of an inference set to download and pass it as an argument to phoenix.load_example. For example, run the following to download production and training data for our demo sentiment classification model:

inferences = px.load_example("sentiment_classification_language_drift")
inferences

px.load_example returns your downloaded data in the form of an ExampleInferences instance. After running the code above, you should see the following in your cell output.

ExampleInferences(primary=<Inferences "sentiment_classification_language_drift_primary">, reference=<Inferences "sentiment_classification_language_drift_reference">)
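
As the output suggests, the returned ExampleInferences bundles a primary set (here, production data) and a reference set (here, training data), each accessible as an attribute:

primary = inferences.primary      # production data for the demo model
reference = inferences.reference  # training baseline to compare against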

Inspect Your Inferences

Next, inspect the name, dataframe, and schema that define your primary inferences. First, run

prim_ds = inferences.primary
prim_ds.name

to see the name of the inferences in your cell output:

'sentiment_classification_language_drift_primary'

Next, run

prim_ds.schema

to see your inferences' schema in the cell output. The schema tells Phoenix how to interpret each column of the underlying dataframe (prediction IDs, timestamps, features, labels, and embeddings):

Schema(prediction_id_column_name='prediction_id', timestamp_column_name='prediction_ts', feature_column_names=['reviewer_age', 'reviewer_gender', 'product_category', 'language'], tag_column_names=None, prediction_label_column_name='pred_label', prediction_score_column_name=None, actual_label_column_name='label', actual_score_column_name=None, embedding_feature_column_names={'text_embedding': EmbeddingColumnNames(vector_column_name='text_vector', raw_data_column_name='text', link_to_data_column_name=None)}, excluded_column_names=None)
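
The schema's fields are plain attributes, so you can read any of them directly. For instance, to list the feature columns shown above:

prim_ds.schema.feature_column_names
# ['reviewer_age', 'reviewer_gender', 'product_category', 'language']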

Finally, run

prim_ds.dataframe.info()

to get an overview of your inferences' underlying dataframe:

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 33411 entries, 2022-05-01 07:00:16+00:00 to 2022-06-01 07:00:16+00:00
Data columns (total 10 columns):
 #   Column            Non-Null Count  Dtype
---  ------            --------------  -----
 0   prediction_ts     33411 non-null  datetime64[ns, UTC]
 1   reviewer_age      33411 non-null  int16
 2   reviewer_gender   33411 non-null  object
 3   product_category  33411 non-null  object
 4   language          33411 non-null  object
 5   text              33411 non-null  object
 6   text_vector       33411 non-null  object
 7   label             33411 non-null  object
 8   pred_label        33411 non-null  object
 9   prediction_id     0 non-null      object
dtypes: datetime64[ns, UTC](1), int16(1), object(8)
memory usage: 2.6+ MB
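
Because prim_ds.dataframe is an ordinary pandas DataFrame, the usual inspection tools work as well; for example, peek at the first few rows with

prim_ds.dataframe.head()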

Launch the App

Launch Phoenix with

px.launch_app(inferences.primary, inferences.reference)

Follow the instructions in the cell output to open the Phoenix UI in your notebook or in a separate browser tab.
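
px.launch_app also returns a session object. If the inline notebook view doesn't render, you can print the session's URL and open it in a browser; the url attribute below is how recent Phoenix versions expose it, so treat it as an assumption for your version:

session = px.launch_app(inferences.primary, inferences.reference)
print(session.url)  # address of the running Phoenix UI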

View Available Traces

Phoenix also supports LLM application traces and ships with example trace datasets you can explore. To see a list of available trace examples, run

px.load_example_traces?

Then pass the name of an example to px.load_example_traces and hand the result to px.launch_app. For example:

# Load up the LlamaIndex RAG example
px.launch_app(trace=px.load_example_traces("llama_index_rag"))