Quickstart: LLM

Learn how to trace your LLM application and run evaluations in Arize

Tracing

To trace your LLM app and start troubleshooting your LLM calls, you'll need to do the following:

Install our tracing packages

Run the following command to install our open source tracing packages.

pip install opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-openai

Get your API keys

Go to your space settings in the left navigation; your API keys appear on the right-hand side. You'll need the Space Key and API Key for the next step.
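
For example, you can keep the keys out of your source code by exporting them as environment variables and reading them in Python before setting up tracing. This is a minimal sketch; the environment variable names below are just a convention, not something Arize requires.

import os

# Read the keys from environment variables rather than hard-coding them.
# Set these in your shell first, e.g. export ARIZE_SPACE_KEY=... and export ARIZE_API_KEY=...
ARIZE_SPACE_KEY = os.environ["ARIZE_SPACE_KEY"]
ARIZE_API_KEY = os.environ["ARIZE_API_KEY"]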

Add our tracing code

Are you coding with JavaScript instead of Python? See our detailed guide on auto-instrumentation or manual instrumentation with JavaScript examples.

Install the proper packages for OpenAI

pip install openai

The following code snippet showcases how to automatically instrument your OpenAI application.

import openai
import os

# Import open-telemetry dependencies
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.resources import Resource

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Set the Space and API keys as headers for authentication
headers = f"space_key={ARIZE_SPACE_KEY},api_key={ARIZE_API_KEY}"
os.environ['OTEL_EXPORTER_OTLP_TRACES_HEADERS'] = headers

# Set resource attributes for the name and version for your application
resource = Resource(
    attributes={
        "model_id":"quickstart-llm-tutorial", # Set this to any name you'd like for your app
    }
)

# Define the span processor as an exporter to the desired endpoint
endpoint = "https://otlp.arize.com/v1"
span_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = SimpleSpanProcessor(span_exporter=span_exporter)

# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(resource=resource)
tracer_provider.add_span_processor(span_processor=span_processor)
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# Finish automatic instrumentation
OpenAIInstrumentor().instrument()

To test, let's send a chat request to OpenAI:

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku."}],
    max_tokens=20,
)
print(response.choices[0].message.content)

Now start asking questions to your LLM app and watch the traces being collected by Arize. For more examples of instrumenting OpenAI applications, check our openinference-instrumentation-openai examples.

Run your LLM application

Once you've executed a sufficient number of queries (or chats) to your application, you can view the details on the LLM Tracing page.
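
If you just want to generate some traffic, a minimal loop like the sketch below sends a few chat requests through the instrumented client set up above so that traces start appearing in Arize. The questions are placeholders; use whatever prompts make sense for your application.

# Assumes the instrumented `client` from the tracing setup above
questions = [
    "What is retrieval-augmented generation?",
    "Summarize the plot of Hamlet in two sentences.",
    "What are the main causes of inflation?",
]

for question in questions:
    # Each call is traced automatically by the OpenAI instrumentor
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)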

Evaluation

Install the Arize SDK

pip install -q 'arize[LLM_Evaluation]'

Import your spans in code

Once you have traces in Arize, you can visit the LLM Tracing tab to see your traces and export them in code. By clicking the export button, you can get the boilerplate code to copy and paste into your evaluator.

# this will be prefilled by the export command. 
# Note: This uses a different API Key than the one above.
ARIZE_API_KEY = ''

# import statements required for getting your spans
import os
os.environ['ARIZE_API_KEY'] = ARIZE_API_KEY
from datetime import datetime
from arize.exporter import ArizeExportClient 
from arize.utils.types import Environments

# Exporting your dataset into a dataframe
client = ArizeExportClient()
primary_df = client.export_model_to_df(
    space_id='', # this will be prefilled by export
    model_id='', # this will be prefilled by export
    environment=Environments.TRACING, 
    start_time=datetime.fromisoformat(''), # this will be prefilled by export 
    end_time=datetime.fromisoformat(''), # this will be prefilled by export
)

Run a custom evaluator using Phoenix

Import the functions from our Phoenix library to run a custom evaluation using OpenAI.

import os
from phoenix.evals import OpenAIModel, llm_classify

Ensure your OpenAI API key is set up correctly for your OpenAI model.

api_key = os.environ.get("OPENAI_API_KEY")
eval_model = OpenAIModel(
    model="gpt-4-turbo-preview", temperature=0, api_key=api_key
)

Create a prompt template for the LLM to judge the quality of your responses. Below is an example that judges the positivity or negativity of the LLM output.

MY_CUSTOM_TEMPLATE = '''
    You are evaluating the positivity or negativity of the responses to questions.
    [BEGIN DATA]
    ************
    [Question]: {input}
    ************
    [Response]: {output}
    [END DATA]


    Please focus on the tone of the response.
    Your answer must be a single word, either "positive" or "negative".
    '''

Notice the variables in brackets for {input} and {output} above. You will need to map those variables to columns in your dataframe so you can run your custom template. We use OpenInference, a set of conventions complementary to OpenTelemetry, to trace AI applications, so the attributes of the trace will differ depending on the provider you are using.

You can use the code below to check which attributes are in the traces in your dataframe.

primary_df.columns

Use the code below to set the input and output variables needed for the prompt above.

primary_df["input"] = primary_df["attributes.llm.input_messages.contents"]
primary_df["output"] = primary_df["attributes.llm.output_messages.contents"]

Use the llm_classify function to run the evaluation using your custom template. You will be using the dataframe from the traces you generated above.

evals_df = llm_classify(
    dataframe=primary_df,
    template=MY_CUSTOM_TEMPLATE,
    model=eval_model,
    rails=["positive", "negative"],
)

If you'd like more information, see our detailed guide on custom evaluators. You can also use our pre-tested evaluators for evaluating hallucination, toxicity, retrieval, etc.
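
For instance, a toxicity check with one of the pre-tested templates might look like the sketch below. It reuses the input column mapped above; note that the exact template variables and rails can differ between Phoenix versions, so check the template you import before running it.

from phoenix.evals import TOXICITY_PROMPT_TEMPLATE, TOXICITY_PROMPT_RAILS_MAP, llm_classify

# Classify each trace's input text as toxic or non-toxic using the pre-tested template.
# The rails are taken from the template's rails map rather than hard-coded.
toxicity_evals_df = llm_classify(
    dataframe=primary_df,
    template=TOXICITY_PROMPT_TEMPLATE,
    model=eval_model,
    rails=list(TOXICITY_PROMPT_RAILS_MAP.values()),
)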

Log evaluations back to Arize

Export the evals you generated above to Arize using the log_evaluations function as part of our Python SDK. See more information on how to do this in our article on custom evaluators.

Currently, our evaluations are logged within Arize every 24 hours, and we're working on making them as close to instant as possible. Reach out to support@arize.com if you run into any issues.

import os
from arize.pandas.logger import Client

API_KEY = os.environ.get("ARIZE_API_KEY")
SPACE_KEY = os.environ.get("ARIZE_SPACE_KEY")

# Initialize Arize client using the model_id and version you used previously
arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)
model_id = "quickstart-llm-tutorial"

# Set the evals_df to have the correct span ID to log it to Arize
evals_df = evals_df.set_index(primary_df["context.span_id"])

# Use Arize client to log evaluations
response = arize_client.log_evaluations(
    dataframe=evals_df,
    model_id=model_id,
)

# If successful, the server will return a status_code of 200
if response.status_code != 200:
    print(f"❌ logging failed with response code {response.status_code}, {response.text}")
else:
    print(f"✅ You have successfully logged evaluations to Arize")

Next steps

Dive deeper into the following topics to keep improving your LLM application!
