OpenAI
Arize has first-class support for instrumenting OpenAI calls and seeing both input and output messages. We support role types such as system, user, and assistant, as well as function calling.
We follow a standardized format for how trace data should be structured using OpenInference, our open-source package built on OpenTelemetry.
Use the code block below to get started with our OpenAIInstrumentor.
import openai
import os
# Import open-telemetry dependencies
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.resources import Resource
# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor
# Set the Space and API keys as headers for authentication
# (ARIZE_SPACE_KEY and ARIZE_API_KEY are your credentials from the Arize UI)
headers = f"space_key={ARIZE_SPACE_KEY},api_key={ARIZE_API_KEY}"
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = headers
# Set resource attributes for the name and version of your application
resource = Resource(
    attributes={
        "model_id": "openai-llm-tracing",  # Set this to any name you'd like for your app
        "model_version": "1.0",  # Set this to a version number string
    }
)
# Define the span processor as an exporter to the desired endpoint
endpoint = "https://otlp.arize.com/v1"
span_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(resource=resource)
tracer_provider.add_span_processor(span_processor=span_processor)
trace_api.set_tracer_provider(tracer_provider=tracer_provider)
# Finish automatic instrumentation
OpenAIInstrumentor().instrument()
Now start asking questions to your LLM app and watch the traces being collected by Arize. For more examples of instrumenting OpenAI applications, check our openinference-instrumentation-openai examples.
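As a quick check, an ordinary chat completion call like the sketch below is captured automatically once `OpenAIInstrumentor().instrument()` has run; the system, user, and assistant messages all appear on the resulting span. The model name and prompts here are illustrative placeholders, and the call is skipped when no OpenAI API key is present:

```python
import os

# Messages follow the standard OpenAI chat format; the instrumentor records
# each role (system, user, assistant) on the span along with the response.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is OpenTelemetry?"},
]

# Only make the network call when credentials are available.
if os.environ.get("OPENAI_API_KEY"):
    import openai

    client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any chat-capable model
        messages=messages,
    )
    print(response.choices[0].message.content)
```

No tracing-specific code is needed at the call site; the instrumentor patches the OpenAI client globally.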
The OpenAI auto-instrumentation package can be installed via npm:
npm install @arizeai/openinference-instrumentation-openai
| Library | Instrumentation | Version |
|---|---|---|
| OpenAI | @arizeai/openinference-instrumentation-openai | |
The example below utilizes the OpenInference JavaScript OpenAI example. Navigate to the backend folder.
In addition to the above package, sending traces to Arize requires the following packages: @opentelemetry/exporter-trace-otlp-grpc and @grpc/grpc-js. These packages can be installed via npm by running the following command in your shell.
npm install @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js
instrumentation.ts should be implemented as below (you'll need to install all of the packages imported below in the same manner as above):
/* instrumentation.ts */
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { Resource } from "@opentelemetry/resources";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Arize specific - Create metadata and add your headers
const metadata = new Metadata();
// Your Arize Space and API Keys, which can be found in the UI
metadata.set("space_key", "your-space-key");
metadata.set("api_key", "your-api-key");

const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
});

provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new GrpcOTLPTraceExporter({
      url: "https://otlp.arize.com/v1",
      metadata,
    }),
  ),
);

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
});

provider.register();
If you simultaneously want to send spans to a Phoenix collector, you should also add the following code blocks from the original instrumentation.ts file.
import { OTLPTraceExporter as ProtoOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

// Add as another SpanProcessor below the previous SpanProcessor
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new ProtoOTLPTraceExporter({
      // This is the url where your Phoenix server is running
      url: "http://localhost:6006/v1/traces",
    }),
  ),
);
Follow the steps from the backend and frontend READMEs, or simply run:
docker compose up --build
to build and run the frontend, backend, and Phoenix all at the same time. Navigate to localhost:3000 to begin sending messages to the chatbot, and check out your traces in Arize at app.arize.com or in Phoenix at localhost:6006.