Quickstart: Retrieval

Debug your Search and Retrieval LLM workflows

This quickstart shows how to start logging retrievals from your vector datastore to Phoenix and run evaluations on them.
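
If you want to follow along locally, here is a minimal sketch (assuming the open-source arize-phoenix package is installed) that starts a Phoenix session for the logging and evaluation steps below:

import phoenix as px

# Launch the local Phoenix app; the UI URL is printed when the session starts.
session = px.launch_app()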

Notebooks

Follow our tutorial in a notebook with our LangChain and LlamaIndex integrations.

Framework | Phoenix Inferences | Phoenix Traces & Spans
LangChain | Retrieval Analyzer w/ Embeddings | Traces and Spans
LlamaIndex | Retrieval Analyzer w/ Embeddings | Traces and Spans

Logging Retrievals to Phoenix (as Inferences)

Step 1: Logging Knowledge Base

The first thing we need is a sample of documents from your vector store (the knowledge base) to compare against later. This lets you see whether some sections are never retrieved, or whether some sections receive so much traffic that you may want to expand the context or documents in that area.

id | text | embedding
1 | Voyager 2 is a spacecraft used by NASA to expl... | [-0.02785328, -0.04709944, 0.042922903, 0.0559...

import phoenix as px
from phoenix import EmbeddingColumnNames

# Schema describing the knowledge base (corpus) dataframe above
corpus_schema = px.Schema(
    id_column_name="id",
    document_column_names=EmbeddingColumnNames(
        vector_column_name="embedding",
        raw_data_column_name="text",
    ),
)
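
With the schema defined, the knowledge base dataframe can be wrapped into a Phoenix Inferences object. A minimal sketch, assuming a pandas dataframe named corpus_df with the id, text, and embedding columns shown above (the name corpus_df is illustrative; px.Inferences is the wrapper used in recent Phoenix releases):

import pandas as pd
import phoenix as px

# Illustrative knowledge-base dataframe matching corpus_schema
corpus_df = pd.DataFrame(
    {
        "id": [1],
        "text": ["<document text from your knowledge base>"],
        "embedding": [[-0.0279, -0.0471, 0.0429]],  # real embedding vectors from your embedding model
    }
)

corpus_inferences = px.Inferences(corpus_df, corpus_schema, name="corpus")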

Step 2: Logging Retrieval and Response

We will also log the prompt/response pairs from the deployed application.

query | embedding | retrieved_document_ids | relevance_scores | response
who was the first person that walked on the moon | [-0.0126, 0.0039, 0.0217, ... | [7395, 567965, 323794, ... | [11.30, 7.67, 5.85, ... | Neil Armstrong

from phoenix import RetrievalEmbeddingColumnNames, Schema

# Schema describing the query/response dataframe above
primary_schema = Schema(
    prediction_id_column_name="id",
    prompt_column_names=RetrievalEmbeddingColumnNames(
        vector_column_name="embedding",
        raw_data_column_name="query",
        context_retrieval_ids_column_name="retrieved_document_ids",
        context_retrieval_scores_column_name="relevance_scores",
    ),
    response_column_names="response",
)
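
Once both schemas exist, the two dataframes can be loaded into Phoenix together. A minimal sketch, assuming a query_df dataframe with the columns shown above plus an id column, and the corpus_inferences object from Step 1 (names like query_df are illustrative; the corpus argument of px.launch_app is how the knowledge base is attached):

# Wrap the application queries/responses and launch (or relaunch) Phoenix with both datasets
primary_inferences = px.Inferences(query_df, primary_schema, name="production")
session = px.launch_app(primary=primary_inferences, corpus=corpus_inferences)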

Running Evaluations on your Retrievals

To run retrieval Evals, the following code can be used for quick analysis of traces from common frameworks such as LangChain and LlamaIndex. Independent of the framework you are instrumenting, Phoenix traces let you get retrieval data into a common dataframe format that follows the OpenInference specification.

# Get traces from Phoenix into dataframe 

spans_df = px.active_session().get_spans_dataframe()
spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()

from phoenix.session.evaluation import get_qa_with_reference, get_retrieved_documents

retrieved_documents_df = get_retrieved_documents(px.active_session())
queries_df = get_qa_with_reference(px.active_session())

Once the data is in a dataframe, evaluations can be run on it. Evaluations can be run on different spans of data; in the example below we run them on the top-level spans that represent a single trace.
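
For example, retriever spans can be selected from the full spans dataframe by their span kind. A small sketch, assuming span kinds follow the OpenInference convention (values such as "RETRIEVER", "LLM", "CHAIN"):

# Select only the retriever spans from the full spans dataframe
retriever_spans_df = spans_df[spans_df["span_kind"] == "RETRIEVER"]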

Q&A and Hallucination Evals

This example shows how to run Q&A and Hallucination Evals with OpenAI (many other models are available, including Anthropic, Mixtral/Mistral, Gemini, Azure OpenAI, Bedrock, etc.).

from phoenix.trace import SpanEvaluations, DocumentEvaluations
from phoenix.evals import (
  HALLUCINATION_PROMPT_RAILS_MAP,
  HALLUCINATION_PROMPT_TEMPLATE,
  QA_PROMPT_RAILS_MAP,
  QA_PROMPT_TEMPLATE,
  OpenAIModel,
  llm_classify,
)

# Creating Hallucination Eval which checks if the application hallucinated
hallucination_eval = llm_classify(
  dataframe=queries_df,
  model=OpenAIModel("gpt-4-turbo-preview", temperature=0.0),
  template=HALLUCINATION_PROMPT_TEMPLATE,
  rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
  provide_explanation=True,  # Makes the LLM explain its reasoning
  concurrency=4,
)
hallucination_eval["score"] = (
  hallucination_eval.label[~hallucination_eval.label.isna()] == "factual"
).astype(int)

# Creating Q&A Eval which checks if the application answered the question correctly
qa_correctness_eval = llm_classify(
  dataframe=queries_df,
  model=OpenAIModel("gpt-4-turbo-preview", temperature=0.0),
  template=QA_PROMPT_TEMPLATE,
  rails=list(QA_PROMPT_RAILS_MAP.values()),
  provide_explanation=True,  # Makes the LLM explain its reasoning
  concurrency=4,
)

qa_correctness_eval["score"] = (
  qa_correctness_eval.label[~qa_correctness_eval.label.isna()] == "correct"
).astype(int)

# Logs the Evaluations back to the Phoenix User Interface (Optional)
px.Client().log_evaluations(
  SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval),
  SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval),
)

The Evals are available locally as dataframes and can be materialized back to the Phoenix UI, where they are attached to the referenced span IDs. The snippet of code above links the Evals back to the spans they were generated against.
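
Since the results are plain dataframes, quick aggregate metrics can be computed locally before (or instead of) logging them to the UI. A small sketch using the hallucination_eval and qa_correctness_eval dataframes produced above:

# Fraction of responses judged factual / correct (rows without a label are skipped by mean())
print("factual rate:", hallucination_eval["score"].mean())
print("QA correctness rate:", qa_correctness_eval["score"].mean())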

Retrieval Chunk Evals

Retrieval Evals are run on the individual chunks returned on retrieval. In addition to calculating chunk-level metrics, Phoenix also calculates MRR and NDCG for the retrieved span.

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

retrieved_documents_eval = llm_classify(
    dataframe=retrieved_documents_df,
    model=OpenAIModel("gpt-4-turbo-preview", temperature=0.0),
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),
    provide_explanation=True,
)

retrieved_documents_eval["score"] = (
    retrieved_documents_eval.label[~retrieved_documents_eval.label.isna()] == "relevant"
).astype(int)

px.Client().log_evaluations(
    DocumentEvaluations(eval_name="Relevance", dataframe=retrieved_documents_eval)
)

The relevance calculation runs the LLM Eval on every chunk returned for a span, and log_evaluations connects the resulting Evals back to the original spans.
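
As a rough illustration of the ranking metrics mentioned above (which Phoenix computes for you), MRR can be derived by hand from the chunk-level relevance scores. A sketch, assuming retrieved_documents_eval keeps the same index as retrieved_documents_df, i.e. a two-level index of span ID and 0-based document position in retrieval order; the index layout here is an assumption, not the exact Phoenix schema:

# Rough Mean Reciprocal Rank from binary chunk relevance
scores = retrieved_documents_eval["score"].reset_index()
span_col, position_col = scores.columns[0], scores.columns[1]  # assumed index levels

def reciprocal_rank(group):
    # 1 / rank of the first relevant chunk for a span, 0.0 if nothing was relevant
    relevant = group.loc[group["score"] == 1, position_col]
    return 1.0 / (relevant.min() + 1) if len(relevant) else 0.0

print("MRR:", scores.groupby(span_col).apply(reciprocal_rank).mean())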

