VertexAI

Instrument LLM calls made using VertexAI's SDK via the VertexAIInstrumentor

The VertexAI SDK can be instrumented using the openinference-instrumentation-vertexai package.

Launch Phoenix

Sign up for Phoenix:

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

Install packages:

pip install arize-phoenix-otel

Connect your application to your cloud instance:

import os
from phoenix.otel import register

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

# configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
) 

Your Phoenix API key can be found in the Keys section of your dashboard.

Install

pip install openinference-instrumentation-vertexai vertexai

Setup

See Google's guide on setting up your environment for the Google Cloud AI Platform. You can also store your Project ID in the CLOUD_ML_PROJECT_ID environment variable.
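
If you prefer to set these values from code, a minimal sketch might look like the following (the project ID and credentials path are placeholders, not values from this guide):

import os

# Placeholder values -- substitute your own GCP project ID and service-account key path
os.environ["CLOUD_ML_PROJECT_ID"] = "my-gcp-project"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"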

Initialize the VertexAIInstrumentor before your application code.

from openinference.instrumentation.vertexai import VertexAIInstrumentor

VertexAIInstrumentor().instrument(tracer_provider=tracer_provider)
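
If you need to disable tracing later (for example, in tests), the instrumentor can be turned off again using the standard uninstrument method inherited from OpenTelemetry's BaseInstrumentor:

# Remove the VertexAI instrumentation when tracing is no longer wanted
VertexAIInstrumentor().uninstrument()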

Run VertexAI

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

print(model.generate_content("Why is the sky blue?").text)
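
Streaming and chat calls go through the same SDK entry points, so they are traced as well. As a quick sketch using the model from above:

# Streaming responses are captured just like blocking calls
for chunk in model.generate_content("Tell me a haiku about the ocean.", stream=True):
    print(chunk.text, end="")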

Observe

Now that tracing is set up, all invocations of Vertex models will be streamed to your running Phoenix instance for observability and evaluation.
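
Traces appear in the Phoenix UI under the project you registered. If you also have the full arize-phoenix package installed, the collected spans can be pulled back for analysis; a minimal sketch, assuming the same PHOENIX_* environment variables are set:

import phoenix as px

# Download the spans recorded for this project as a pandas DataFrame
spans_df = px.Client().get_spans_dataframe(project_name="my-llm-app")
print(spans_df.head())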
