Haystack

Instrument LLM applications built with Haystack

In this example, we will instrument an LLM application built using Haystack.

pip install openinference-instrumentation-haystack haystack-ai arize-otel opentelemetry-sdk opentelemetry-exporter-otlp

Set up the HaystackInstrumentor to trace your Haystack application and send the traces to Arize at the endpoint defined below.

from openinference.instrumentation.haystack import HaystackInstrumentor
# Import open-telemetry dependencies
from arize_otel import register_otel, Endpoints

# Setup OTEL via our convenience function
register_otel(
    endpoints = Endpoints.ARIZE,
    space_id = "your-space-id", # in app space settings page
    api_key = "your-api-key", # in app space settings page
    model_id = "your-model-id", # name this to whatever you would like
)

HaystackInstrumentor().instrument()
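The pipeline in the next step calls OpenAI. By default, Haystack's OpenAIGenerator reads its credentials from the OPENAI_API_KEY environment variable, so export it before running (the value below is a placeholder):

```shell
# Placeholder key — replace with your own OpenAI API key
export OPENAI_API_KEY="your-openai-api-key"
```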

Set up a simple Pipeline and see it instrumented.

from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator

# Initialize the pipeline
pipeline = Pipeline()

# Initialize the OpenAI generator component
llm = OpenAIGenerator(model="gpt-3.5-turbo")

# Add the generator component to the pipeline
pipeline.add_component("llm", llm)

# Define the question
question = "What is the location of the Hanging Gardens of Babylon?"

# Run the pipeline with the question
response = pipeline.run({"llm": {"prompt": question}})

print(response)
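pipeline.run returns a dict keyed by component name, and OpenAIGenerator exposes its generated strings under a "replies" list. A minimal sketch of pulling out the first reply, using a hard-coded response dict of the same shape so it runs without an API call (the reply text and meta fields here are illustrative, not actual model output):

```python
# Illustrative shape of a pipeline.run result for the "llm" component;
# real reply text and meta will vary per call.
response = {
    "llm": {
        "replies": [
            "The Hanging Gardens of Babylon were said to be in the "
            "ancient city of Babylon, near present-day Hillah, Iraq."
        ],
        "meta": [{"model": "gpt-3.5-turbo"}],
    }
}

# Extract the first generated reply from the generator component's output
reply = response["llm"]["replies"][0]
print(reply)
```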

Copyright © 2023 Arize AI, Inc