Use the code block below to get started with our OpenAIInstrumentor.
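If you haven't already, install the relevant packages with pip. The package names below are inferred from the imports in the snippet (arize-otel provides arize.otel, openinference-instrumentation-openai provides the instrumentor, and openai is the library being instrumented):
pip install arize-otel openinference-instrumentation-openai openai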
# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",  # in app space settings page
    api_key="your-api-key",  # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Finish automatic instrumentation
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
The OpenAI auto-instrumentation package can be installed via npm:
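npm install @arizeai/openinference-instrumentation-openai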
The example below utilizes the OpenInference JavaScript OpenAI example.
Navigate to the backend folder.
In addition to the above package, sending traces to Arize requires the following packages: @opentelemetry/exporter-trace-otlp-grpc and @grpc/grpc-js. These packages can be installed via npm by running the following command in your shell:
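npm install @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js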
instrumentation.ts should be implemented as below (you'll need to install all of the packages imported below in the same manner as above):
/* instrumentation.ts */
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import {
  OpenAIInstrumentation,
} from "@arizeai/openinference-instrumentation-openai";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { resourceFromAttributes } from "@opentelemetry/resources";
import {
  OTLPTraceExporter as GrpcOTLPTraceExporter,
} from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Arize specific - Create metadata and add your headers
const metadata = new Metadata();
// Your Arize Space and API Keys, which can be found in the UI
metadata.set("space_id", "your-space-id");
metadata.set("api_key", "your-api-key");

// Log spans to the console for local debugging
const processor = new SimpleSpanProcessor(new ConsoleSpanExporter());
// Export spans to Arize over gRPC
const otlpProcessor = new SimpleSpanProcessor(
  new GrpcOTLPTraceExporter({
    url: "https://otlp.arize.com/v1",
    metadata,
  })
);

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
  spanProcessors: [processor, otlpProcessor],
});

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
});

provider.register();
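Once the instrumentation is registered, spans are created for OpenAI calls automatically. Below is a minimal usage sketch, assuming the openai Node SDK is installed and OPENAI_API_KEY is set in the environment; the file name app.ts and the model name are illustrative, not part of the setup above:
/* app.ts — illustrative usage sketch */
import "./instrumentation"; // load instrumentation before the OpenAI client is used
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(response.choices[0]?.message.content);
}

main();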
If you simultaneously want to send spans to a Phoenix collector, you should also create a span processor pointing to Phoenix and add it to the list of spanProcessors in your provider.
import {
  OTLPTraceExporter as ProtoOTLPTraceExporter,
} from "@opentelemetry/exporter-trace-otlp-proto";

// Create another SpanProcessor to send spans to Phoenix
const phoenixProcessor = new SimpleSpanProcessor(
  new ProtoOTLPTraceExporter({
    // This is the url where your phoenix server is running
    url: "http://localhost:6006/v1/traces",
  }),
);

// Add the additional processor to the provider
const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
  spanProcessors: [processor, otlpProcessor, phoenixProcessor],
});
Follow the steps from the backend and frontend readme. Or simply run:
docker compose up --build
to build and run the frontend, backend, and Phoenix all at the same time. Navigate to localhost:3000 to begin sending messages to the chatbot, and check out your traces in Arize or in Phoenix at localhost:6006.
Tracing Function Calls
Arize has first-class support for instrumenting LLM calls and seeing both input and output messages. We support role types such as system, user, and assistant messages, as well as function calling.
We follow a standardized format for how trace data should be structured using OpenInference, which is our open source package based on OpenTelemetry. The package we are using is arize-otel, which is a lightweight convenience package to set up OpenTelemetry and send traces to Arize.
Now start asking questions to your LLM app and watch the traces being collected by Arize. For more examples of instrumenting OpenAI applications, check our examples.
Arize simplifies debugging by automatically tracing function calls made by LLMs. With the above code, Arize will receive all function calls and allow users to review the function outputs, arguments, and any parameters used. These traces can be loaded into the prompt playground, where you can experiment with parameters, adjust function definitions, and optimize workflows.
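As an illustration of what gets captured, the sketch below defines a hypothetical get_weather tool on a chat completion; with the instrumentation above, the model's chosen function, its arguments, and the request parameters are recorded on the exported span. The tool, model, and file names here are assumptions for the example, not part of the Arize setup:
/* toolCalls.ts — hypothetical sketch of a traced function (tool) call */
import "./instrumentation"; // load instrumentation before the OpenAI client is used
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [{ role: "user", content: "What's the weather in Paris?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // hypothetical tool
          description: "Get the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  });
  // The model's tool call (function name + JSON arguments) lands on the span
  console.log(response.choices[0]?.message.tool_calls);
}

main();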