Guardrails AI
Instrument LLM applications using the Guardrails AI framework
This guide shows how to set up instrumentation for your guardrails using OpenInference. For more information, see the Guardrails AI documentation.
In this example we will instrument a small program that uses the Guardrails AI framework to protect its LLM calls.
Set up GuardrailsInstrumentor
Use the GuardrailsInstrumentor to trace your guardrails application and send the traces to Arize at the endpoint defined below.
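A minimal sketch of the setup, assuming the `openinference-instrumentation-guardrails` and `arize-otel` packages are installed (for example via pip) and that the space ID, API key, and project name below are placeholders for your own Arize credentials:

```python
from arize.otel import register
from openinference.instrumentation.guardrails import GuardrailsInstrumentor

# Register an OpenTelemetry tracer provider that exports traces to Arize.
# Replace the placeholders with your own Arize Space ID and API key.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="guardrails-tracing-example",  # hypothetical project name
)

# Instrument Guardrails so guard executions and their LLM calls are captured as spans.
GuardrailsInstrumentor().instrument(tracer_provider=tracer_provider)
```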
To test, run the following code and observe your traces in Arize.
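A small example that exercises a guard, assuming the `guardrails-ai` and `openai` packages are installed, the TwoWords validator has been added from the Guardrails Hub, and `OPENAI_API_KEY` is set in your environment; the prompt and model are illustrative, and the call signature may differ slightly between Guardrails versions:

```python
import openai
from guardrails import Guard
from guardrails.hub import TwoWords  # assumes: guardrails hub install hub://guardrails/two_words

# Wrap the LLM call in a guard that validates the output is exactly two words.
guard = Guard().use(TwoWords())

# Run the guarded LLM call; the instrumentor records the guard and LLM spans.
response = guard(
    llm_api=openai.chat.completions.create,
    model="gpt-3.5-turbo",
    prompt="What is another name for America?",
    max_tokens=1024,
)

print(response.validated_output)
```

After running this, the resulting traces should appear in your Arize project.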