LiteLLM

LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server that lets you call 100+ LLMs in the OpenAI format. Both are supported by this auto-instrumentation.

Any calls made to the following functions will be automatically captured by this integration (a short sketch of the async variants follows the list):

  • completion()

  • acompletion()

  • completion_with_retries()

  • embedding()

  • aembedding()

  • image_generation()

  • aimage_generation()
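
For example, the async variants are captured just like their sync counterparts once instrumentation is set up (see Setup below). A minimal sketch, assuming an OpenAI key is configured; the model name is just an example:

import asyncio
import litellm

async def main():
    # acompletion() is captured as a span, just like completion()
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response)

asyncio.run(main())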

Launch Phoenix

Sign up for Phoenix:

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

Install packages:

pip install arize-phoenix-otel

Connect your application to your cloud instance:

import os
from phoenix.otel import register

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"

# configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
  endpoint="https://app.phoenix.arize.com/v1/traces",
)

Your Phoenix API key can be found on the Keys section of your dashboard.
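
Alternatively, the configuration can be supplied via environment variables before calling register(). A minimal sketch, assuming Phoenix reads the standard PHOENIX_CLIENT_HEADERS and PHOENIX_COLLECTOR_ENDPOINT variables when no explicit endpoint is passed:

import os
from phoenix.otel import register

# Assumption: register() picks these up when no endpoint argument is given
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=ADD YOUR API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

tracer_provider = register(project_name="my-llm-app")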

Install

pip install openinference-instrumentation-litellm litellm

Setup

Initialize the LiteLLMInstrumentor before your application code.

from openinference.instrumentation.litellm import LiteLLMInstrumentor

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
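
Instrumentation can also be removed later, e.g. during test teardown. A minimal sketch, assuming the instrumentor exposes the standard uninstrument() method from the OpenTelemetry base instrumentor:

from openinference.instrumentation.litellm import LiteLLMInstrumentor

instrumentor = LiteLLMInstrumentor()
instrumentor.instrument(tracer_provider=tracer_provider)

# ... make LiteLLM calls ...

# Stop capturing spans, e.g. during teardown
instrumentor.uninstrument()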

Add any API keys needed by the models you are using with LiteLLM.

import os
os.environ["OPENAI_API_KEY"] = "PASTE_YOUR_API_KEY_HERE"

Run LiteLLM

You can now use LiteLLM as normal, and calls will be traced in Phoenix.

import litellm

completion_response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}],
)
print(completion_response)
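
The other captured functions are traced the same way. For instance, an embedding call (the model name here is an example):

import litellm

# Embedding calls are captured just like completions
embedding_response = litellm.embedding(
    model="text-embedding-ada-002",
    input=["What's the capital of China?"],
)
print(embedding_response)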

Observe

Traces should now be visible in Phoenix!
