Embeddings Analysis

Embedding Details

For each embedding described in the inference schema(s), Phoenix serves an embeddings troubleshooting view to help you identify areas of drift and performance degradation.
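
If you are setting this up from code, here is a minimal sketch of how an embedding and two inference sets might be declared so that the view appears. It assumes the Python client's px.Schema, px.EmbeddingColumnNames, and px.Inferences API (older releases call the inference class px.Dataset); the dataframes, file paths, and column names are hypothetical.

```python
import pandas as pd
import phoenix as px

# Hypothetical dataframes: each row carries raw text, a prediction, and a
# pre-computed embedding vector stored in the "text_vector" column.
prod_df = pd.read_parquet("production.parquet")    # hypothetical path
baseline_df = pd.read_parquet("baseline.parquet")  # hypothetical path

# Declare the embedding in the schema so Phoenix knows which column holds the
# vectors and which column holds the raw data they were computed from.
schema = px.Schema(
    prediction_label_column_name="predicted_label",
    actual_label_column_name="actual_label",
    embedding_feature_column_names={
        "text_embedding": px.EmbeddingColumnNames(
            vector_column_name="text_vector",
            raw_data_column_name="text",
        ),
    },
)

# Two inference sets: primary (production) and reference (baseline).
primary = px.Inferences(dataframe=prod_df, schema=schema, name="production")
reference = px.Inferences(dataframe=baseline_df, schema=schema, name="baseline")

# Launch the app; an embeddings troubleshooting view appears for "text_embedding".
session = px.launch_app(primary=primary, reference=reference)
```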

Let's start with embedding drift.

Embedding Drift Over Time

The picture below shows a time series graph of the drift between two groups of vectors: the primary (typically production) vectors and the reference / baseline vectors. Phoenix uses Euclidean distance as the primary measure of embedding drift and helps you identify times when your inference set is diverging from a given reference baseline.

Euclidean distance over time

Moments of high Euclidean distance are an indication that the primary inference set is starting to drift from the reference inference set. As the primary inferences move further away from the reference (both in angle and in magnitude), the Euclidean distance increases as well. For this reason, times of high Euclidean distance are a good starting point for identifying new anomalies and areas of drift.

Centroids of the two inference sets are used to calculate Euclidean and cosine distance

For an in-depth guide to Euclidean distance and embedding drift, check out Arize's ML course.

In Phoenix, you can view the drift of a particular embedding in a time series graph at the top of the page. To diagnose the cause of the drift, click on the graph at different times to view a breakdown of the embeddings at that time.

Click on a particular time to view why the inference embeddings are drifting
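
As a rough illustration of the metric (not Phoenix's internal implementation), the sketch below computes the Euclidean and cosine distance between the reference centroid and the hourly centroids of the primary inferences. The dataframe layout, a datetime "timestamp" column plus a "text_vector" column of embedding vectors, is an assumption of this example.

```python
import numpy as np
import pandas as pd

def centroid_drift(primary_df: pd.DataFrame, reference_df: pd.DataFrame,
                   vector_col: str = "text_vector", ts_col: str = "timestamp") -> pd.DataFrame:
    """Distance between the reference centroid and hourly primary centroids."""
    ref_centroid = np.mean(np.stack(reference_df[vector_col].to_list()), axis=0)

    rows = []
    # ts_col is assumed to already be a datetime column.
    for bucket, group in primary_df.groupby(pd.Grouper(key=ts_col, freq="1h")):
        if group.empty:
            continue
        prim_centroid = np.mean(np.stack(group[vector_col].to_list()), axis=0)
        euclidean = float(np.linalg.norm(prim_centroid - ref_centroid))
        cosine = 1.0 - float(
            np.dot(prim_centroid, ref_centroid)
            / (np.linalg.norm(prim_centroid) * np.linalg.norm(ref_centroid))
        )
        rows.append({"time": bucket, "euclidean": euclidean, "cosine": cosine})
    return pd.DataFrame(rows)
```

Spikes in the returned Euclidean column correspond to the moments of drift described above.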

Clusters

Phoenix automatically breaks up your embeddings into groups of inferences using a clustering algorithm called HDBSCAN. This is particularly useful if you are trying to identify areas of your embeddings that are drifting or performing badly.

When two inference sets are used to initialize Phoenix, the clusters are automatically ordered by drift. This means that clusters suffering from the highest amount of under-sampling (more points from the primary inferences than from the reference) are bubbled to the top, as sketched below. You can click on these clusters to view the details of the points contained in each cluster.
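
The ordering can be pictured with a small sketch: cluster the combined embeddings and rank each cluster by how heavily it is dominated by primary points. This illustrates the idea rather than Phoenix's internal computation, and it leans on the open-source hdbscan package for the clustering step.

```python
import numpy as np
from hdbscan import HDBSCAN  # pip install hdbscan

def clusters_ordered_by_drift(primary_vectors: np.ndarray,
                              reference_vectors: np.ndarray,
                              min_cluster_size: int = 10):
    """Cluster combined embeddings, then rank clusters by share of primary points."""
    combined = np.vstack([primary_vectors, reference_vectors])
    is_primary = np.array(
        [True] * len(primary_vectors) + [False] * len(reference_vectors)
    )

    labels = HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(combined)

    ranking = []
    for cluster_id in sorted(set(labels) - {-1}):  # -1 marks HDBSCAN noise points
        members = labels == cluster_id
        primary_share = float(is_primary[members].mean())  # 1.0 => all primary
        ranking.append({"cluster": int(cluster_id),
                        "primary_share": primary_share,
                        "size": int(members.sum())})

    # Clusters dominated by primary inferences bubble to the top.
    return sorted(ranking, key=lambda row: row["primary_share"], reverse=True)
```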

UMAP Point-Cloud

Phoenix projects the embeddings you provided into a lower-dimensional space (3 dimensions) using a dimension reduction algorithm called UMAP (Uniform Manifold Approximation and Projection). This lets you understand how your embeddings have encoded semantic meaning in a visually understandable way.

In addition to the point-cloud, another dimension we have at our disposal is color (and in some cases shape). Out of the box, Phoenix lets you assign colors to the UMAP point-cloud by dimension (features, tags, predictions, actuals), performance (correctness, which distinguishes true positives and true negatives from incorrect predictions), and inference set (to highlight areas of drift). This helps you explore your point-cloud from different perspectives depending on what you are looking for.

Color by inferences vs color by correctness vs color by prediction for a computer vision model
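
To make the projection step concrete, here is a minimal sketch using the open-source umap-learn package to reduce two sets of embedding vectors to 3 dimensions and tag each point with its inference set for coloring. It mirrors the kind of reduction described above rather than Phoenix's exact configuration.

```python
import numpy as np
import umap  # pip install umap-learn

def project_to_3d(primary_vectors: np.ndarray, reference_vectors: np.ndarray):
    """Reduce both inference sets to 3 dimensions and tag each point's origin."""
    combined = np.vstack([primary_vectors, reference_vectors])
    reducer = umap.UMAP(n_components=3, n_neighbors=15, min_dist=0.1, random_state=42)
    projected = reducer.fit_transform(combined)  # shape: (n_points, 3)

    # A simple "color by inference set" assignment: primary vs reference.
    inference_set = np.array(
        ["primary"] * len(primary_vectors) + ["reference"] * len(reference_vectors)
    )
    return projected, inference_set
```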

Note that when you are troubleshooting search and retrieval using corpus inferences, the Euclidean distance of your queries to your knowledge base vectors is presented as query distance.
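
To give a feel for that quantity, the sketch below measures the Euclidean distance from each query embedding to its nearest knowledge-base vector; treating "nearest" as the aggregation is an assumption of this example, not a statement of Phoenix's exact definition.

```python
import numpy as np

def query_distances(query_vectors: np.ndarray, corpus_vectors: np.ndarray) -> np.ndarray:
    """Euclidean distance from each query embedding to its nearest corpus vector."""
    # Pairwise distances, shape (n_queries, n_corpus).
    diffs = query_vectors[:, None, :] - corpus_vectors[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    return pairwise.min(axis=1)
```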
