Arize Eval Templates
Overview
We have built simple functions for using our eval prompt templates. These prompts are tested against benchmarked datasets and target precision of 70-90% and F1 scores of 70-85%. We use Phoenix, our open-source package, to run evaluations.
To use our evaluators, follow the steps below.
Choose an evaluator
The llm_classify function allows you to run LLM as a Judge. Arize has open-sourced the code for LLM as a Judge in Phoenix.
| Evaluator | Required columns | Output labels | Description |
| --- | --- | --- | --- |
| Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text, given an input query. |
| QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully and correctly answers a question, given an input query and reference documents. |
| Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or irrelevant to the corresponding input. |
| Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content. |
| Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document. |
| Code Generation | query, code | readable, unreadable | Evaluates whether code correctly implements the query. |
| Toxicity | text | toxic, non-toxic | Evaluates whether text is toxic. |
| Human Vs AI | question, correct_answer, ai_generated_answer | correct, incorrect | Compares an AI-generated answer against a human-written answer. |
| Citation Evals | conversation, document_text | correct, incorrect | Checks whether a citation correctly answers the question, based on the text of the cited page and the conversation. |
| User Frustration | conversation | frustrated, ok | Checks whether the user is frustrated in the conversation. |
| SQL Generation | question, query_gen, response | correct, incorrect | Checks whether the generated SQL is correct, given the question. |
| Tool Calling Eval | question, tool_call | correct, incorrect | Checks whether tool calling function calls and extracted parameters are correct. |
Have Copilot Choose an Evaluator
If you are unsure which eval to choose, ✨Copilot can choose one for you. Navigate to the main chat in the UI and ask Copilot to suggest a Phoenix eval for your application.
Using llm_classify
The llm_classify function classifies inputs and generates evals. Arize uses the Arize Phoenix open-source library to run LLM as a Judge.
Set up the evaluation library
All of our evaluators can be imported from the phoenix library, which you can install using the command below.
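As a sketch, the evals package can be installed from PyPI under the name Arize publishes (`openai` and `pandas` are only needed for the OpenAI-based example that follows):

```shell
pip install -q arize-phoenix-evals openai pandas
```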
Import the pre-tested evaluators along with the helper functions using this code snippet.
Next, you need to set up the evaluators to use a specific large language model provider. This example uses OpenAIModel, but you can use any of our supported evaluation models. In this example, we will use the hallucination evaluator and the QA correctness evaluator.
Prepare your data
Our evaluation functions require dataframes to be passed with specific column names. You can construct these dataframes manually or you can manipulate the dataframes you retrieve from traces in Arize or traces in Phoenix.
For this example, we will create the dataframe from scratch to include the required columns we need -- input, reference, and output.
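For instance, a toy dataframe with the three required columns might look like this (the rows mirror the Sacramento/Carson City example used in the results below):

```python
import pandas as pd

# Each row is one example to evaluate: the user query, the retrieved
# reference text, and the model's answer.
df = pd.DataFrame(
    {
        "input": [
            "What is the capital of California?",
            "What is the capital of California?",
        ],
        "reference": [
            "Sacramento is the capital of California.",
            "Carson City is the capital city of Nevada.",
        ],
        "output": ["Sacramento", "Carson City"],
    }
)
```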
Run the eval
Then you can use the llm_classify function to run the evals on your dataframe.
Now you have the results of your hallucination eval and QA correctness eval! Printing the results shows the label and explanation for each row (input data columns not shown).
hallucination_eval_df

| label | explanation |
| --- | --- |
| factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination. |
| hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text. |

qa_correctness_eval_df

| label | explanation |
| --- | --- |
| correct | The question asks for the capital of California. The reference text directly states that Sacramento is the capital of California. Therefore, the given answer, "Sacramento", directly matches the information provided in the reference text, accurately answering the question posed. There is no discrepancy between the question, the reference text, and the given answer, making the answer correct. |
| incorrect | The question asks for the capital of California, but the answer provided is "Carson City," which the reference text correctly identifies as the capital of Nevada, not California. Therefore, the answer does not correctly answer the question about California's capital. |
If you'd like, you can log those evaluations back to Arize to save the results.
Supported Models
The models below can be instantiated and used in the LLM eval functions. The models are also directly callable with strings.
We currently support a growing set of models for LLM Evals; please check out the API section for usage.
| Model | Support |
| --- | --- |
| GPT-4 | ✔ |
| GPT-4o | ✔ |
| GPT-4o Mini | ✔ |
| GPT-3.5 Turbo | ✔ |
| GPT-3.5 Instruct | ✔ |
| Claude 3.5 Sonnet | ✔ |
| Claude 3.5 Opus | ✔ |
| Claude 3.5 Haiku | ✔ |
| Gemini 1.5 Pro | ✔ |
| Gemini 1.5 Flash | ✔ |
| Gemini 1.0 Pro | ✔ |
| Llama 3.1 405B/70B/8B | ✔ |
| Azure Hosted Open AI | ✔ |
| Palm 2 Vertex | ✔ |
| AWS Bedrock | ✔ |
| Litellm | ✔ |
| Huggingface Llama7B | (use litellm) |
| Anthropic | ✔ |
| Cohere | (use litellm) |
🎓 Learn more about the concept of LLM as a judge.