Haystack

Instrument LLM applications built with Haystack

In this example, we will instrument an LLM application built using Haystack.

pip install openinference-instrumentation-haystack haystack-ai arize-otel opentelemetry-sdk opentelemetry-exporter-otlp

Set up HaystackInstrumentor to trace your Haystack application and send the traces to Arize, using the space and API credentials configured below.

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id = "your-space-id", # in app space settings page
    api_key = "your-api-key", # in app space settings page
    project_name = "your-project-name", # name this to whatever you would like
)

# Import openinference instrumentor
from openinference.instrumentation.haystack import HaystackInstrumentor

# Turn on the instrumentor
HaystackInstrumentor().instrument(tracer_provider=tracer_provider)
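
The example below uses OpenAIGenerator, which by default reads its API key from the OPENAI_API_KEY environment variable, so make sure it is set before running the pipeline (the placeholder value here is illustrative):

import os

# OpenAIGenerator resolves its key from OPENAI_API_KEY by default
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"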

Set up a simple Pipeline and see it instrumented:

from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator

# Initialize the pipeline
pipeline = Pipeline()

# Initialize the OpenAI generator component
llm = OpenAIGenerator(model="gpt-3.5-turbo")

# Add the generator component to the pipeline
pipeline.add_component("llm", llm)

# Define the question
question = "What is the location of the Hanging Gardens of Babylon?"

# Run the pipeline with the question
response = pipeline.run({"llm": {"prompt": question}})

print(response)
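
The response is a dictionary keyed by component name; for OpenAIGenerator, the generated text is under response["llm"]["replies"]. Multi-component pipelines are traced the same way, with each component run recorded as its own span. As a rough sketch (the component names and prompt template here are illustrative, not part of the example above), the same question could be routed through a PromptBuilder connected to the generator:

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

# Build a two-component pipeline: prompt template -> LLM
rag_pipeline = Pipeline()
rag_pipeline.add_component(
    "prompt_builder",
    PromptBuilder(template="Answer concisely: {{ question }}"),
)
rag_pipeline.add_component("llm", OpenAIGenerator(model="gpt-3.5-turbo"))

# Wire the rendered prompt into the generator's prompt input
rag_pipeline.connect("prompt_builder.prompt", "llm.prompt")

# Each component run is captured as its own span under the pipeline trace
response = rag_pipeline.run(
    {"prompt_builder": {"question": "What is the location of the Hanging Gardens of Babylon?"}}
)
print(response["llm"]["replies"][0])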
