Quickstart: LLM

Learn how to trace your LLM application and run evaluations in Arize

To trace your LLM app and start troubleshooting your LLM calls, you'll need to do the following:

1. Install our tracing packages
2. Get your API keys
3. Add our tracing code
4. Run your LLM application

You can also dive right into the examples below.

Install our tracing packages

Run the following commands to install our open-source tracing packages, which work on top of OpenTelemetry. The example below uses OpenAI, but we support many other LLM providers (see the full list).

Using pip

pip install arize-otel openai openinference-instrumentation-openai opentelemetry-sdk opentelemetry-exporter-otlp

Using conda

conda install -c conda-forge openai openinference-instrumentation-openai opentelemetry-sdk opentelemetry-exporter-otlp

Note that arize-otel is not included in the conda command above; if it isn't available on your conda channel, install it with pip.

Get your API keys

Go to Space Settings in the left navigation, and you will see your API keys on the right-hand side. You'll need the Space ID and API key for the next step.

Where to find your API Keys

Add our tracing code

Arize acts as an OpenTelemetry collector, which means you can configure your tracer and span processor yourself. For more OTel configurability, see how to set your tracer for auto-instrumentors.
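For reference, here is a minimal sketch of what that looks like if you wire up the exporter and span processor by hand instead of using the register_otel convenience function shown below. The endpoint URL and header names here are illustrative assumptions, not confirmed values; check the Arize documentation for the exact settings for your space.

# Manual OpenTelemetry setup -- a sketch, not the exact Arize configuration
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Placeholder endpoint and header names -- confirm these in the Arize docs
exporter = OTLPSpanExporter(
    endpoint="https://otlp.arize.com/v1",
    headers={"space_id": "your-space-id", "api_key": "your-api-key"},
)

# Register a tracer provider that batches spans and ships them to the exporter
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)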

The auto-instrumentation example below is in Python.

Are you coding with JavaScript instead of Python? See our detailed guide on auto-instrumentation or manual instrumentation with JavaScript examples.

The following code snippet showcases how to automatically instrument your OpenAI application.

# Import open-telemetry dependencies
from arize_otel import register_otel, Endpoints

# Setup OTEL via our convenience function
register_otel(
    endpoints=Endpoints.ARIZE,
    space_id="your-space-id",  # found on the Space Settings page
    api_key="your-api-key",    # found on the Space Settings page
    model_id="your-model-id",  # name this whatever you would like
)
# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Finish automatic instrumentation
OpenAIInstrumentor().instrument()

Set your OpenAI API key:

import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key")

To test, let's send a chat request to OpenAI:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku."}],
    max_tokens=20,
)
print(response.choices[0].message.content)

Now start asking questions of your LLM app and watch the traces being collected by Arize. You can also follow along in our Colab guide here.
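For example, a short script can loop over several prompts to generate a handful of traces. Because spans are exported in batches in the background, it helps to flush the tracer provider before a short-lived script exits; the flush call below assumes register_otel installed the OpenTelemetry SDK's TracerProvider as the global provider.

import openai
from opentelemetry import trace

client = openai.OpenAI()

prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "What is the capital of France?",
    "Write a limerick about tracing.",
]

# Each completion call below is captured as a trace by the instrumentor
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

# Spans export in batches; flush before exiting so nothing is left buffered.
# (Assumes the global provider is the SDK TracerProvider set up earlier.)
trace.get_tracer_provider().force_flush()

Each completion call should then appear as a new trace in Arize shortly after it runs.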

Run your LLM application

Once you've run a sufficient number of queries (or chats) through your application, you can view the details on the LLM Tracing page.

A detailed view of a trace of a RAG application using LlamaIndex

To continue with this guide, head to Quickstart: Evaluation to add evaluation labels to your traces!

Next steps

Dive deeper into the following topics to keep improving your LLM application!
