Quickstart: Tracing
Overview
Tracing is a powerful tool for understanding the behavior of your LLM application. Phoenix offers best-in-class tracing regardless of the framework you use, with first-class instrumentation for a variety of frameworks (LlamaIndex, LangChain, DSPy), SDKs (OpenAI, Bedrock, Mistral, Vertex), and languages (Python, JavaScript). You can also manually instrument your application using the OpenTelemetry SDK.
This example will walk you through how to use Phoenix to trace OpenAI requests.
Install Dependencies
Let's start by installing the necessary dependencies.
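For this walkthrough you'll need Phoenix itself, the OpenAI SDK, and the OpenInference OpenAI instrumentor. A minimal install, assuming the current PyPI package names:

```bash
pip install arize-phoenix openai openinference-instrumentation-openai
```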
Launch Phoenix
You have a few options for how to start a Phoenix app. We're using a cloud instance for this tutorial, but if you don't want to sign up for one, you can also launch Phoenix in your notebook environment or via Docker.
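As a sketch, here's how you might point a notebook at a Phoenix cloud instance via environment variables, with a local fallback. The endpoint URL and variable names (`PHOENIX_COLLECTOR_ENDPOINT`, `PHOENIX_CLIENT_HEADERS`) reflect current Phoenix releases; copy the exact values from your instance's settings page if they differ:

```python
import os

# Send traces to a Phoenix Cloud instance (endpoint and variable names
# assume a current Phoenix release; check your instance's settings page).
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=YOUR_PHOENIX_API_KEY"

# Or skip the cloud signup and run Phoenix locally in the notebook instead:
# import phoenix as px
# px.launch_app()
```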
Now that we have Phoenix configured, we can register the instance with OpenTelemetry, which allows us to collect traces from our application.
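A minimal sketch using Phoenix's OpenTelemetry helper; the `project_name` value is illustrative:

```python
from phoenix.otel import register

# Sets up an OpenTelemetry TracerProvider that exports spans to the
# Phoenix instance configured above.
tracer_provider = register(project_name="my-llm-app")  # project name is a hypothetical example
```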
Instrument your application
Now we need to indicate which methods and attributes we want to trace. Phoenix has a number of built-in tracers for popular frameworks and provides tools to manually instrument your application if needed. See here for a full list of integrations.
Since we're using OpenAI here, we'll use the built-in OpenAI instrumentor Phoenix provides.
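Something like the following, using the `openinference-instrumentation-openai` package installed earlier:

```python
from openinference.instrumentation.openai import OpenAIInstrumentor

# Patch the OpenAI SDK so every request is captured as a span and
# exported via the tracer provider registered above.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```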
Use OpenAI as normal
From here we can use OpenAI as normal. All of our requests will be traced and reported to Phoenix automatically.
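For example, a plain chat completion; the model name is just an illustration, and this assumes `OPENAI_API_KEY` is set in your environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this one is illustrative
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response.choices[0].message.content)

# The request above should now appear as a trace in the Phoenix UI.
```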