Quickstart: LLM Tracing
Learn how to trace your LLM application and run evaluations in Arize
To trace your LLM app and start troubleshooting your LLM calls, you'll need to: install our tracing packages, get your Space ID and API key from your space settings, instrument your application, and send a few test requests.
You can also dive right into examples below.
Python:
JS/TS:
Run the commands below to install our open-source tracing packages, which work on top of OpenTelemetry. The example below uses OpenAI, and we support many other LLM providers.
Using pip
Using conda
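As a sketch, assuming the `arize-otel` convenience package and the OpenInference OpenAI instrumentor (verify the current package names against the Arize docs), the pip install might look like:

```shell
# Package names are assumptions; check the Arize docs for the current set.
pip install arize-otel openai openinference-instrumentation-openai
```

The conda equivalent installs the same packages from your configured channels.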
Go to your space settings in the left navigation; your API keys appear on the right-hand side. You'll need your Space ID and API key for the next step.
Python and JS/TS examples are shown below.
The following code snippet showcases how to automatically instrument your OpenAI application.
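A minimal sketch of what this setup can look like, assuming the `arize.otel` convenience package and the OpenInference OpenAI instrumentor (adjust names to match the SDK version you installed):

```python
# Sketch: register a tracer provider that exports to Arize, then
# auto-instrument the openai client library. Module and function names
# are assumed from the arize-otel / openinference packages.
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="YOUR_SPACE_ID",    # from your space settings
    api_key="YOUR_API_KEY",      # from your space settings
    project_name="quickstart",   # any project name you like
)

# Every call made through the openai package is now traced automatically.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```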
Set OpenAI Key:
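For example, you can export the key as an environment variable before creating the client (the placeholder value is yours to replace):

```python
import os

# The openai client reads OPENAI_API_KEY from the environment by default.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
```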
To test, let's send a chat request to OpenAI:
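A test request could look like the following (the model name is illustrative; any chat-capable model works):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response.choices[0].message.content)
```

Because the instrumentor wraps the `openai` package, this call is traced without any extra code.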
Now start asking questions to your LLM app and watch the traces being collected by Arize.
Once you've executed a sufficient number of queries (or chats) to your application, you can view the details on the LLM Tracing page.
Dive deeper into the following topics to keep improving your LLM application!
Arize acts as an OpenTelemetry collector, which means you can configure your own tracer and span processor. See our documentation for more OTel configurability.
The package we are using is a lightweight convenience package that sets up OpenTelemetry and sends traces to Arize.
Are you coding with JavaScript instead of Python? See our guides on auto-instrumentation or manual instrumentation, with JavaScript examples.
To continue with this guide, head to the next section to add evaluation labels to your traces!