Retrieval with Embeddings

Overview

Q&A with Retrieval at a Glance

LLM Input: User Query + retrieved document

LLM Output: Response based on query + document

Evaluation Metrics:

  1. Did the LLM answer the question correctly (correctness)?

  2. For each retrieved document, is the document relevant to answer the user query?

Possibly the most common use case for an LLM application is connecting an LLM to proprietary data such as enterprise documents or video transcriptions. Applications like these are often built on top of LLM frameworks such as LangChain or LlamaIndex, which have first-class support for vector store retrievers. Vector stores enable teams to connect their own data to LLMs. A common application is a chatbot that looks across a company's knowledge base/context to answer specific questions.
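As a minimal sketch of such a pipeline (assuming LlamaIndex and an OpenAI key; the directory path, top-k, and question are illustrative, not part of these docs):

```python
# Sketch: retrieval-augmented Q&A with LlamaIndex. Paths and settings are assumptions.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load proprietary documents and build an in-memory vector store index.
documents = SimpleDirectoryReader("./enterprise_docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# The query engine retrieves the top-k most similar chunks and passes them
# to the LLM together with the user query.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What is our parental leave policy?")
print(response)
```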

How to Evaluate Retrieval Systems

There are varying degrees to which we can evaluate retrieval systems.

Step 1: First, we care whether the chatbot is correctly answering the user's questions. Are there certain types of questions the chatbot gets wrong more often?

Step 2: Once we know there's an issue, we need metrics to trace where specifically it went wrong. Is the issue with retrieval? Are the documents that the system retrieves irrelevant?

Step 3: If retrieval is not the issue, we should check whether we even have the right documents to answer the question.

| Question | Metric | Pros | Cons |
| --- | --- | --- | --- |
| Is this a bad response to the question? | User feedback or LLM Eval for Q&A | Most relevant way to measure the application | Hard to trace down what specifically to fix |
| Is the retrieved context relevant? | LLM Eval for Relevance | Directly measures effectiveness of retrieval | Requires additional LLM calls |
| Is the knowledge base missing areas of user queries? | Query density (drift) - Phoenix generated | Highlights groups of queries with large distance from context | Identifies broad topics missing from the knowledge base, but not small gaps |
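To make the first two rows concrete, here is a hedged sketch of how these LLM evals might be run with the phoenix.evals library (the dataframe column names and the judge model are assumptions; see the Evals quickstart for the exact API):

```python
# Sketch: Q&A correctness and retrieval relevance evals with phoenix.evals.
# Column names ("input", "reference", "output") and the judge model are assumptions.
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    QA_PROMPT_RAILS_MAP,
    QA_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

df = pd.DataFrame(
    {
        "input": ["How do I reset my password?"],
        "reference": ["To reset your password, go to Settings > Security ..."],
        "output": ["Go to Settings > Security and click 'Reset password'."],
    }
)
model = OpenAIModel(model="gpt-4o-mini")

# 1) Correctness: did the LLM answer the question correctly given the retrieved docs?
qa_evals = llm_classify(
    dataframe=df,
    template=QA_PROMPT_TEMPLATE,
    model=model,
    rails=list(QA_PROMPT_RAILS_MAP.values()),
)

# 2) Relevance: is each retrieved document relevant to the user query?
relevance_evals = llm_classify(
    dataframe=df,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    model=model,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),
)
```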

Using Phoenix Traces & Spans

Visualize the chain of the traces and spans for a Q&A chatbot use case. You can click into specific spans.

When clicking into the retrieval span, you can see the relevance score for each document. This can surface irrelevant context.
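As a rough sketch of how such traces get into Phoenix in the first place (the project name, instrumentor package, and local launch are assumptions; see the Tracing quickstart for the exact setup), you register the Phoenix OTel provider and auto-instrument your framework:

```python
# Sketch: sending traces from a LlamaIndex Q&A app to Phoenix via OpenInference.
# Project name, instrumentor, and local launch are assumptions.
import phoenix as px
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

px.launch_app()  # start a local Phoenix instance (or point to a running one)

tracer_provider = register(project_name="rag-chatbot")
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

# From here on, retriever and LLM spans from the Q&A chain appear in Phoenix,
# where you can click into the retrieval span and inspect per-document scores.
```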

Using Phoenix Inferences to Analyze RAG (Retrieval Augmented Generation)

Step 1. Identifying Clusters of Bad Responses

Phoenix surfaces clusters of similar queries that have received poor feedback.

Step 2: Irrelevant Documents Being Retrieved

Phoenix can help uncover when irrelevant context is being retrieved using the LLM Evals for Relevance. You can look at a cluster's aggregate relevance metric with precision@k, NDCG, MRR, etc., to identify where to improve. You can also look at a single prompt/response pair and see the relevance of each retrieved document.
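For reference, these ranking metrics can be computed directly from per-document relevance labels. The helper below is an illustrative, self-contained sketch rather than a Phoenix API:

```python
# Illustrative ranking metrics over binary per-document relevance labels
# (1 = relevant, 0 = irrelevant), ordered by retrieval rank. Not a Phoenix API.
import math
from typing import Sequence

def precision_at_k(relevance: Sequence[int], k: int) -> float:
    return sum(relevance[:k]) / k

def mrr(relevance: Sequence[int]) -> float:
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(relevance: Sequence[int], k: int) -> float:
    dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevance[:k], start=1))
    ideal = sorted(relevance, reverse=True)
    idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the first and third retrieved documents were judged relevant.
labels = [1, 0, 1, 0]
print(precision_at_k(labels, 2), mrr(labels), ndcg_at_k(labels, 4))
```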

Step 3: Don't Have Any Documents Close Enough

Phoenix can help you identify whether there is context missing from your knowledge base. By visualizing query density, you can understand which topics you need to add documentation for in order to improve your chatbot's responses.

By setting the "primary" dataset as the user queries and the "corpus" dataset as the context in your vector store, you can see if there are clusters of user query embeddings that have no nearby context embeddings, as seen in the example below.
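A hedged sketch of that setup with the Phoenix inferences API (the column names, toy embeddings, and schema fields are assumptions; see the Inferences docs for the exact schema):

```python
# Sketch: launching Phoenix with user queries as the primary inferences and the
# vector store contents as the corpus. Column names and schema fields are assumptions.
import pandas as pd
import phoenix as px

queries_df = pd.DataFrame(
    {
        "query": ["How do I rotate my API key?"],
        "embedding": [[0.12, -0.03, 0.88]],  # query embedding vector
    }
)
corpus_df = pd.DataFrame(
    {
        "id": [0],
        "text": ["To rotate an API key, open the Settings page ..."],
        "embedding": [[0.10, -0.01, 0.91]],  # document embedding vector
    }
)

query_schema = px.Schema(
    prompt_column_names=px.EmbeddingColumnNames(
        vector_column_name="embedding", raw_data_column_name="query"
    )
)
corpus_schema = px.Schema(
    id_column_name="id",
    document_column_names=px.EmbeddingColumnNames(
        vector_column_name="embedding", raw_data_column_name="text"
    ),
)

session = px.launch_app(
    primary=px.Inferences(queries_df, query_schema, name="queries"),
    corpus=px.Inferences(corpus_df, corpus_schema, name="corpus"),
)
```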

Troubleshooting Tip:

Looking for code to get started? Go to our Quickstart guide for Search and Retrieval.

Found a problematic cluster you want to dig into, but don't want to manually sift through all of the prompts and responses? Ask ChatGPT to help you understand the makeup of the cluster. Try out the colab here.
