Configure OTEL Tracer

When the register function does not offer enough customization for your needs, you can use the OpenTelemetry SDK directly to control how your traces are sent.

This snippet contains a few OTel concepts:

  • The headers carry the credentials, set through an environment variable, that authenticate the data you send.

  • A resource represents an origin (e.g., a particular service, or in this case, a project) from which your spans are emitted.

  • Span processors filter, batch, and perform operations on your spans prior to export. You can add multiple processors to export to multiple destinations, such as the console.

  • Your tracer provides a handle for you to create spans and add attributes in your application code (a manual span example follows the snippet below).

  • The collector (e.g., Phoenix) receives the spans exported by your application.

The SimpleSpanProcessor is synchronous and blocking; use the BatchSpanProcessor for non-blocking instrumentation in production applications.
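If you want to see spans in your console the moment they end, for instance while debugging locally, you could swap in a SimpleSpanProcessor instead. A minimal sketch, assuming a tracer_provider configured as in the snippet below:

# For local debugging only: SimpleSpanProcessor exports each span
# synchronously as soon as it ends, briefly blocking the application.
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))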

Here is sample code to set up instrumentation with the OpenTelemetry libraries before starting the OpenAI auto-instrumentor from OpenInference.

Install the libraries

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc openinference-instrumentation-openai openinference-semantic-conventions

Customize your tracing below

import os

# Import open-telemetry dependencies
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, BatchSpanProcessor
from opentelemetry.sdk.resources import Resource

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Set the Space and API keys as headers for authentication
ARIZE_SPACE_ID = "your-space-id"  # Replace with your Arize Space ID
ARIZE_API_KEY = "your-api-key"  # Replace with your Arize API key
headers = f"space_id={ARIZE_SPACE_ID},api_key={ARIZE_API_KEY}"
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = headers

# Set resource attributes with the name and version of your application
trace_attributes = {
  "model_id": "your model name", # This is how your model will show up in Arize
  "model_version": "v1", # You can filter your spans by model version in Arize
}

# Define the desired endpoint URL to send traces
endpoint = "https://otlp.arize.com/v1"

# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(
  resource=Resource(attributes=trace_attributes)
)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# Get a tracer handle for creating spans manually
tracer = trace_api.get_tracer(__name__)

# Finish automatic instrumentation
OpenAIInstrumentor().instrument()
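
With automatic instrumentation in place, you can also use the tracer handle from above to create your own spans around application logic. A minimal sketch; the span name and attribute are illustrative placeholders:

# Wrap your own code in a manual span; automatic OpenAI spans created
# inside this block become its children via context propagation.
with tracer.start_as_current_span("my-operation") as span:
    span.set_attribute("user.query", "What is OpenTelemetry?")
    # ... your application code here ...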

Now start asking questions to your LLM app and watch the traces being collected by Arize. For more examples of customizing your OTEL tracer, check our openinference-instrumentation-openai examples.
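
For example, a chat completion call like the one below is traced automatically once the instrumentor is running. A minimal sketch, assuming the openai package is installed and OPENAI_API_KEY is set; the model name is a placeholder:

# Any OpenAI call made after instrument() produces spans automatically.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)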
