LangChain
Extract OpenInference inferences and traces to visualize and troubleshoot your LLM Application in Phoenix
Phoenix has first-class support for LangChain applications. This means that you can easily extract inferences and traces from your LangChain application and visualize them in Phoenix.
Traces provide telemetry data about the execution of your LLM application. They are a great way to understand the internals of your LangChain application and to troubleshoot problems related to things like retrieval and tool execution.
To extract traces from your LangChain application, you will have to add Phoenix's OpenInference Tracer to your LangChain application. A tracer is a class that automatically accumulates traces (sometimes referred to as spans) as your application executes. The OpenInference Tracer is specifically designed to work with Phoenix and by default exports traces to a locally running Phoenix server.
To view traces in Phoenix, you will first have to start a Phoenix server. You can do this by running the following:
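A minimal sketch (assuming Phoenix is installed via the `arize-phoenix` package and run from a notebook or script):

```python
import phoenix as px

# Launch a local Phoenix server; the returned session exposes the URL of the UI.
session = px.launch_app()
```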
Once you have started a Phoenix server, you can start your LangChain application with the OpenInference Tracer as a callback. There are two ways of adding the `tracer` to your LangChain application: by instrumenting all your chains in one go (recommended) or by adding the tracer as a callback to just the parts that you care about (not recommended).
We recommend that you instrument your entire LangChain application to maximize visibility. To do this, we will use the `LangChainInstrumentor` to add the `OpenInferenceTracer` to every chain in your application.
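A sketch of this one-call instrumentation (assuming the tracer and instrumentor live under `phoenix.trace.langchain`, as in recent Phoenix releases):

```python
from phoenix.trace.langchain import LangChainInstrumentor, OpenInferenceTracer

# Instrument all of LangChain in one go: the tracer is attached as a callback
# to every chain, LLM, retriever, and tool invocation in the application.
tracer = OpenInferenceTracer()
LangChainInstrumentor(tracer).instrument()
```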
By adding the tracer to the callbacks of LangChain, we've created a one-way data connection between your LLM application and Phoenix. This is because by default the `OpenInferenceTracer` uses an `HTTPExporter` to send traces to your locally running Phoenix server! In this scenario the Phoenix server is serving as a Collector of the spans that are exported from your LangChain application.
To view the traces in Phoenix, simply open the UI in your browser.
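If you launched Phoenix from a notebook, the session can hand you the URL directly (a small sketch; it assumes `px.active_session()` returns the session started by `px.launch_app()`):

```python
import phoenix as px

# Print the URL of the running Phoenix UI so you can open it in a browser.
print(px.active_session().url)
```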
If you would like to save your traces to a file for later use, you can extract them directly from the tracer and dump them to a file (we recommend `jsonl` for readability).
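A sketch of dumping spans to `jsonl` (assuming, as in Phoenix releases of this vintage, that the tracer exposes `get_spans()` and that `phoenix.trace.span_json_encoder` provides a `spans_to_jsonl` helper):

```python
from phoenix.trace.span_json_encoder import spans_to_jsonl

# Serialize every span the tracer has accumulated into a jsonl file,
# one JSON-encoded span per line.
with open("trace.jsonl", "w") as f:
    f.write(spans_to_jsonl(tracer.get_spans()))
```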
Now you can save this file for later inspection. To launch the app with the file generated above, simply pass the contents of the file via a `TraceDataset`.
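A sketch of re-loading the saved traces (assuming a `json_lines_to_df` helper under `phoenix.trace.utils` and that `px.launch_app` accepts a `trace` argument, as in Phoenix versions of this era):

```python
import phoenix as px
from phoenix import TraceDataset
from phoenix.trace.utils import json_lines_to_df

# Read the saved spans back in, one JSON line per span.
with open("trace.jsonl", "r") as f:
    json_lines = f.readlines()

# Rebuild a TraceDataset from the span records and relaunch Phoenix with it.
trace_ds = TraceDataset(json_lines_to_df(json_lines))
px.launch_app(trace=trace_ds)
```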
In this way, you can use files as a means to store and communicate interesting traces that you may want to use to share with a team or to use later down the line to fine-tune an LLM or model.
For a fully working example of tracing with LangChain, check out our Colab notebook.
Phoenix supports visualizing LLM application inference data from a LangChain application. In particular, you can use Phoenix's embeddings projection and clustering to troubleshoot retrieval-augmented generation. For a tutorial on how to extract embeddings and inferences from LangChain, check out the following notebook.