Guardrails
Correct undesirable LLM outputs from reaching your customers
Last updated
Was this helpful?
Correct undesirable LLM outputs from reaching your customers
Last updated
Was this helpful?
Guardrails correct undesirable outputs at run-time, ensuring real-time safety and compliance. Failed messages trigger corrective actions such as default responses, retries, or blocking outputs entirely.
Guardrails can be applied to either user input messages (e.g. jailbreak attempts) or LLM output messages (e.g. answer relevance). If a message in a LLM chat fails a Guard, then the Guard will take a corrective action, either providing a default response to the user or prompting the LLM to generate a new response.
We offer four types of Guards:
Dataset Embeddings Guard: Provided few shot examples of "bad" messages, Guard against similar inputs based on the cosine distance between embeddings.
General LLM Guard: Provide a prompt for an LLM Evaluator to classify the input as "pass" or "fail".
RAG LLM Guard: Similar to the General LLM Guard, but designed for the special case where the prompt includes additional context from a RAG application.
Few Shot LLM Guard: Provided few shot examples, an LLM Evaluator will classify the input as "pass" or "fail".
Users have the option to instantiate the Guards with off-the-shelf prompts / datasets from Arize, or customize the Guard with their own prompts / datasets. While our demo notebooks use Open AI models, any model provider can be used with a Guard.
Arize offers an off-the-shelf ArizeDatasetEmbeddings
Guard. Given any dataset of "bad" examples, this Guard will protect against similar messages in the LLM chat.
This Guard works in the following way:
The Guard computes embeddings for chunks associated with a set of few shot examples of "bad" user prompts or LLM messages (we recommend using 10 different prompts to balance performance and latency).
When the Guard is applied to a user or LLM message, the Guard computes the embedding for the input message and checks if any of the few shot "train" chunks in the dataset are close to the message in embedded space.
If the cosine distance between the input message and any of the chunks is within the user-specified threshold (default setting is 0.2), then the Guard intercepts the LLM call.
True Positives: 86.43% of 656 jailbreak prompts failed the DatasetEmbeddings guard.
False Negatives: 13.57% of 656 jailbreak prompts passed the DatasetEmbeddings guard.
False Positives: 13.95% of 2000 regular prompts failed the DatasetEmbeddings guard.
True Negatives: 86.05% of 2000 regular prompts passed the DatasetEmbeddings guard.
1.41 median latency for end-to-end LLM call on GPT-3.5.
Note that the "regular prompts" in the dataset consist of role play prompts that are designed to resemble jailbreak prompts.
(Coming soon) This is similar to the ArizeDatasetEmbeddings
Guard, but instead of chunking and embedding the dataset to compute the cosine distance to input messages, we use the dataset as few shot examples for an LLM prompt. Provided the dataset, the LLM Guard uses the prompt to evaluate whether an incoming message is similar to the dataset.
We recommend two different types of corrective actions when input does not pass the Guard, which can be passed into the Guard upon instantiation:
default response: Instantiate the Guard with on_fail="fix"
if you want the Guard to use a user-defined hard-coded default LLM response.
LLM reask: Instantiate the Guard with on_fail="reask"
to re-prompt the LLM when the Guard fails. Note that this can introduce additional latency in your application.
Additional details in a tutorial (coming soon).
In addition to real-time intervention, Arize offers tracing and visualization tools to investigate chats where the Guard was triggered.
Below we see the following information in the Arize UI for a Jailbreak attempt flagged by the DatasetEmbeddings
Guard:
Each LLM call and guard step that took place under the hood.
The error message from the Guard when it flagged the Jailbreak attempt.
The validator_result: "fail"
The validator_on_fail: "exception"
The cosine_distance: 0.15
, which is the cosine distance of the closest embedded prompt chunk in the set of few shot examples of jailbreak prompts.
The text corresponding to the most_similar_chunk
.
The text corresponding to the input_message
.
For additional support getting started with Guards, please refer to the following resources:
Refer to the and for details, which can be loaded into Colab.
By default, the ArizeDatasetEmbeddings
Guard will use few shot examples from a of jailbreak prompts. We benchmarked the performance of our model on this dataset and recorded the following results:
By comparison, the associated with the dataset explains that jailbreak prompts have a 68.5% attack success rate (ASR) on the GPT-4 model.
Refer to for implementation details.
We offer , and RAG LLM Judges as off-the-shelf Arize Guards. After instantiating the Guard, simply pass in the user_message
, retrieved context
and llm_response
to the at runtime and it will Guard against problematic messages. Each off-the-shelf Guard has been benchmarked on a public dataset (see ).
You can also customize this Guard with your own RAG LLM Judge prompt by inheriting from class.
Refer to the and for additional details.
(Coming soon) All off-the-shelf from Arize will be offered as Guards. Users can also instantiate the Guard with a custom prompt.
Refer to the for an example on how to integrate OTEL tracing with your Guard.
Users have the option to connect their guards to Arize . In the example below, we see a user create a monitor that sends an alert every time the Guard fails. These alerts can be connected to slack, PagerDuty, email, etc.
and with OTEL tracing
and with OTEL tracing
guard_fail
to count the number of times that the Guard fails.guard_fail
metric.