Quickstart: Retrieval
Debug your Search and Retrieval LLM workflows
This quickstart shows how to start logging your retrievals from your vector datastore to Phoenix and run evaluations.
Follow our tutorial in a notebook with our LangChain and LlamaIndex integrations:
LangChain
LlamaIndex
The first thing we need is to collect some samples from your vector store so we have a baseline to compare against later. This lets you see whether some sections are never being retrieved, or whether some sections are getting a lot of traffic, in which case you may want to beef up your context or documents in that area.
Example row of a knowledge-base dataframe (index 1):
  text: Voyager 2 is a spacecraft used by NASA to expl...
  text_vector: [-0.02785328, -0.04709944, 0.042922903, 0.0559...
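As a minimal sketch of assembling such a dataframe (the `embed` function below is a placeholder, not a real embedding model, and the column names are illustrative):

```python
import pandas as pd

def embed(text: str) -> list:
    # Placeholder: swap in your real embedding model
    # (e.g. OpenAI or sentence-transformers).
    return [float(len(word)) for word in text.split()][:4]

# One row per chunk in the vector store: the raw text and its embedding.
corpus_df = pd.DataFrame(
    {
        "text": [
            "Voyager 2 is a spacecraft used by NASA to explore the outer planets.",
        ],
    }
)
corpus_df["text_vector"] = corpus_df["text"].apply(embed)
```

Each chunk in the store becomes one row, so coverage gaps and hot spots show up directly in the dataframe.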
We will also log the prompt/response pairs from the deployed application.
Example row of a query dataframe:
  query: who was the first person that walked on the moon
  query embedding: [-0.0126, 0.0039, 0.0217, ...
  retrieved document ids: [7395, 567965, 323794, ...
  retrieval scores: [11.30, 7.67, 5.85, ...
  response: Neil Armstrong
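A sketch of the same query-level data as a pandas dataframe (column names here are illustrative, not a required schema; the vectors and IDs are the sample values from above):

```python
import pandas as pd

# One row per user query: the query, its embedding, the IDs and scores of
# the retrieved chunks, and the final LLM response.
query_df = pd.DataFrame(
    {
        "query": ["who was the first person that walked on the moon"],
        "query_vector": [[-0.0126, 0.0039, 0.0217]],
        "retrieved_document_ids": [[7395, 567965, 323794]],
        "retrieval_scores": [[11.30, 7.67, 5.85]],
        "response": ["Neil Armstrong"],
    }
)
```

Keeping retrieved IDs and scores alongside each query is what later lets chunk-level Evals be joined back to the query that produced them.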
To run retrieval Evals, the following code can be used for quick analysis with common frameworks such as LangChain and LlamaIndex.
Once the data is in a dataframe, evaluations can be run on it. Evaluations can be run on different spans of data; in the example below we run them on the top-level spans that represent a single trace.
The Evals are available locally in a dataframe and can be materialized back to the Phoenix UI, where they are attached to the referenced span IDs.
The snippet of code above links the Evals back to the spans they were generated against.
The calculation is done using the LLM Eval on all chunks returned for the span, and log_evaluations connects the Evals back to the original spans.
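Conceptually, attaching Evals to spans is a join on the span ID. A minimal sketch with pandas (the column and label names are illustrative; Phoenix's actual export uses its own schema):

```python
import pandas as pd

# Spans exported from the tracer; "context.span_id" uniquely identifies each span.
spans_df = pd.DataFrame(
    {
        "context.span_id": ["span-1", "span-2"],
        "input": ["q1", "q2"],
    }
)

# Eval results produced per span by an LLM Eval.
evals_df = pd.DataFrame(
    {
        "context.span_id": ["span-1", "span-2"],
        "label": ["relevant", "irrelevant"],
        "score": [1, 0],
    }
)

# Joining on the span ID links each Eval back to the span it was generated against.
joined = spans_df.merge(evals_df, on="context.span_id", how="left")
```

This is the same association log_evaluations performs when it sends the Evals back to the Phoenix UI.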
Retrieval Analyzer w/ Embeddings
Traces and Spans
Independent of the framework you are instrumenting, Phoenix traces allow you to get retrieval data in a common dataframe format that follows the specification.
This example shows how to run Q&A and Hallucination Evals with OpenAI (many other providers are available, including Anthropic, Mixtral/Mistral, Gemini, Azure OpenAI, Bedrock, etc.).
Retrieval Evals are run on the individual chunks returned on retrieval. In addition to calculating chunk-level metrics, Phoenix also calculates MRR and NDCG for the retrieved span.
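For intuition, MRR and NDCG can be computed directly from per-chunk relevance labels. A self-contained sketch (not Phoenix's internal implementation, just the standard definitions):

```python
import math

def mrr(relevance: list) -> float:
    # Reciprocal rank of the first relevant chunk; 0 if none are relevant.
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def ndcg(relevance: list) -> float:
    # Normalized discounted cumulative gain over the retrieved chunks:
    # actual DCG divided by the DCG of the ideal (best-possible) ordering.
    dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevance, start=1))
    ideal = sorted(relevance, reverse=True)
    idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Example: binary relevance labels for one span's three retrieved chunks.
labels = [0, 1, 1]
print(mrr(labels))  # 0.5 (first relevant chunk is at rank 2)
```

A span that ranks all relevant chunks first scores an NDCG of 1.0, so lower values point at retrievals whose ordering could be improved.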