Guardrails AI

Instrument LLM applications using the Guardrails AI framework

Set up Guardrails AI with a TwoWords guard

This guide helps you set up instrumentation for your guardrails using OpenInference. For more information, see the Guardrails AI documentation.

In this example, we will instrument a small program that uses the Guardrails AI framework to validate its LLM calls.

pip install openinference-instrumentation-guardrails guardrails-ai arize-otel opentelemetry-sdk opentelemetry-exporter-otlp

Set up the GuardrailsInstrumentor to trace your Guardrails application and send the traces to Arize at the endpoint defined below.

from openinference.instrumentation.guardrails import GuardrailsInstrumentor
# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id = "your-space-id", # in app space settings page
    api_key = "your-api-key", # in app space settings page
    project_name = "your-project-name", # name this to whatever you would like
)
GuardrailsInstrumentor().instrument(tracer_provider=tracer_provider)
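The test program below calls the OpenAI API, so an API key must be available in the environment before it runs. A minimal sketch (the key value is a placeholder, not a real credential):

```python
import os

# Placeholder value; replace with your real OpenAI API key,
# or export OPENAI_API_KEY in your shell instead.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key")
```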

To test, run the following code and observe your traces in Arize.

from guardrails import Guard
from guardrails.hub import TwoWords
import openai

guard = Guard().use(
    TwoWords(),
)

response = guard(
    llm_api=openai.chat.completions.create,
    prompt="What is another name for America?",
    model="gpt-3.5-turbo",
    max_tokens=1024,
)

print(response)
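The `guard(...)` call returns a validation outcome rather than a bare completion, so it can be useful to branch on whether validation passed. A hedged sketch of inspecting the result; the attribute names follow the Guardrails `ValidationOutcome` API, and the `SimpleNamespace` stand-in is an assumption used only so the snippet runs without a live API call:

```python
from types import SimpleNamespace

# Stand-in for the object returned by guard(...) above.
# "United States" satisfies the TwoWords validator from the example.
response = SimpleNamespace(
    validation_passed=True,
    validated_output="United States",
)

if response.validation_passed:
    print("validated output:", response.validated_output)
else:
    print("validation failed")
```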


Copyright © 2023 Arize AI, Inc