OpenAI
Instrument calls to the OpenAI Python Library
The OpenAI Python Library implements Python bindings for OpenAI's popular suite of models. Phoenix provides utilities to instrument calls to OpenAI's API, enabling deep observability into the behavior of an LLM application built on top of these models.
Traces
OpenInference Traces collect telemetry data about the execution of your LLM application. Consider using this instrumentation to understand how an OpenAI model is being called inside a complex system and to troubleshoot issues such as data extraction and response synthesis. These traces can also help debug operational issues such as rate limits, authentication failures, or improperly set model parameters.
Phoenix currently supports calls to the ChatCompletion interface, but more are planned soon.
Have an OpenAI API you would like to see instrumented? Drop us a GitHub issue!
To view OpenInference traces in Phoenix, you will first have to start a Phoenix server. You can do this by running the following:
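A minimal sketch, assuming Phoenix is installed as the arize-phoenix package:

```python
# Launch the Phoenix server and UI in the background of a notebook or script.
import phoenix as px

session = px.launch_app()  # the returned session exposes the URL of the UI
```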
Once you have started a Phoenix server, you can instrument the openai Python library using the OpenAIInstrumentor class.
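A sketch of the instrumentation step; the exact import paths shown here (phoenix.trace.openai, along with a Tracer wired to an HttpExporter) are assumptions that may vary across Phoenix versions:

```python
# Create a tracer that exports spans to the running Phoenix server,
# then patch the openai library so that calls are traced automatically.
from phoenix.trace.exporter import HttpExporter
from phoenix.trace.openai import OpenAIInstrumentor
from phoenix.trace.tracer import Tracer

tracer = Tracer(exporter=HttpExporter())
OpenAIInstrumentor(tracer).instrument()
```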
All subsequent calls to the ChatCompletion interface will now report informational spans to Phoenix. These traces and spans are viewable within the Phoenix UI.
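For example, a traced call might look like the following (this uses the legacy openai.ChatCompletion interface referenced above; the model choice and prompt are purely illustrative):

```python
import openai

# This call is intercepted by the instrumentor; its span appears in the Phoenix UI.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response["choices"][0]["message"]["content"])
```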
Saving Traces
If you would like to save your traces to a file for later use, you can directly extract them from the tracer and dump them into a file (we recommend jsonl for readability).
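A sketch of the dump, assuming a spans_to_jsonl helper in phoenix.trace.span_json_encoder and the tracer created during instrumentation (the file name trace.jsonl is arbitrary):

```python
from phoenix.trace.span_json_encoder import spans_to_jsonl

# Serialize the collected spans, one JSON object per line.
with open("trace.jsonl", "w") as f:
    f.write(spans_to_jsonl(tracer.get_spans()))
```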
Now you can save this file for later inspection. To launch the app with the file generated above, simply pass its contents via a TraceDataset.
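A sketch of reloading the saved traces, assuming a json_lines_to_df helper in phoenix.trace.utils and a trace keyword argument to launch_app:

```python
import phoenix as px
from phoenix.trace.trace_dataset import TraceDataset
from phoenix.trace.utils import json_lines_to_df

# Read the saved spans back in and wrap them in a TraceDataset.
with open("trace.jsonl", "r") as f:
    json_lines = f.readlines()

px.launch_app(trace=TraceDataset(json_lines_to_df(json_lines)))
```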
In this way, you can use files to store and share interesting traces with a team, or to revisit them later down the line, for example to fine-tune an LLM.