Arize Evaluators

Overview

We have built simple functions for using our eval prompt templates. These prompts are tested against benchmarked datasets and target precision of 70-90% and F1 scores of 70-85%. We run evaluations with Phoenix, our open-source package.

To use our evaluators, follow the steps below.

Choose an evaluator

We offer five evaluators out of the box and are working on adding more! Each prompt template has a few required columns that you must include in the dataframe you pass to the evaluator. See the API reference for more details.

You can view the exact prompt templates here, and their benchmarks in our Phoenix docs.

| Evaluator | Required Columns | Output Labels | Use |
| --- | --- | --- | --- |
| Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text given an input query. |
| QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully answers a question correctly given an input query and reference documents. |
| Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or irrelevant to the corresponding input. |
| Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content. |
| Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document. |
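Because each evaluator expects specific columns, a quick pre-flight check can catch a missing column before you spend tokens on an eval run. Below is a minimal sketch: the column mapping mirrors the table above, and `validate_columns` is a hypothetical helper for illustration, not part of the phoenix library.

```python
# Required dataframe columns per evaluator (mirrors the table above).
REQUIRED_COLUMNS = {
    "hallucination": ["input", "reference", "output"],
    "qa": ["input", "reference", "output"],
    "relevance": ["input", "reference"],
    "toxicity": ["input"],
    "summarization": ["input", "output"],
}


def validate_columns(df_columns, evaluator):
    """Return the required columns missing from a dataframe, if any."""
    required = REQUIRED_COLUMNS[evaluator]
    return [col for col in required if col not in df_columns]


# Example: a dataframe with only "input" and "output" columns is missing
# the "reference" column that the hallucination evaluator requires.
missing = validate_columns(["input", "output"], "hallucination")
print(missing)  # → ['reference']
```

Running a check like this before calling an evaluator gives a clearer error than a failure deep inside the eval run.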

Set up the evaluation library

All of our evaluators can be imported from the phoenix library, which you can install using the command below.

pip install "arize-phoenix[evals]"

Import the pre-tested evaluators along with the helper functions using this code snippet.

from phoenix.evals import (
    HallucinationEvaluator,
    QAEvaluator,
    RelevanceEvaluator,
    ToxicityEvaluator,
    SummarizationEvaluator,
    run_evals,
    OpenAIModel
)

Next, set up the evaluators to use a specific large language model provider. This example uses OpenAIModel, but you can use any of our supported evaluation models. Here, we set up the hallucination evaluator and the QA correctness evaluator.

api_key = None  # set your api key here or with the OPENAI_API_KEY environment variable
eval_model = OpenAIModel(model="gpt-4-turbo-preview", api_key=api_key)

hallucination_evaluator = HallucinationEvaluator(eval_model)
qa_correctness_evaluator = QAEvaluator(eval_model)

Prepare your data

Our evaluation functions require dataframes to be passed with specific column names. You can construct these dataframes manually or you can manipulate the dataframes you retrieve from traces in Arize or traces in Phoenix.

For this example, we will create the dataframe from scratch with the required columns: input, reference, and output.

import pandas as pd
dataframe = pd.DataFrame(
    [
        {
            "input": "What is the capital of California?",
            "reference": "Sacramento is the capital of California.",
            "output": "Sacramento",
        },
        {
            "input": "What is the capital of California?",
            "reference": "Carson City is the Capital of Nevada.",
            "output": "Carson City",
        },
    ]
)

Run the eval

Then you can use the run_evals function to run the evals on your dataframe.

hallucination_eval_df, qa_correctness_eval_df = run_evals(
    dataframe=dataframe,
    evaluators=[hallucination_evaluator, qa_correctness_evaluator],
    provide_explanation=True,
)

Now you have the results of your hallucination and QA correctness evals! Printing the result dataframes produces output like the tables below (the input data columns are not included in the returned dataframes).

hallucination_eval_df

| data (not included) | label | explanation |
| --- | --- | --- |
| "input": "What is the capital of California?" / "reference": "Sacramento is the capital of California." / "output": "Sacramento" | factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination. |
| "input": "What is the capital of California?" / "reference": "Carson City is the Capital of Nevada." / "output": "Carson City" | hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text. |

qa_correctness_eval_df

| data (not included) | label | explanation |
| --- | --- | --- |
| "input": "What is the capital of California?" / "reference": "Sacramento is the capital of California." / "output": "Sacramento" | correct | The question asks for the capital of California. The reference text directly states that Sacramento is the capital of California. Therefore, the given answer, "Sacramento", directly matches the information provided in the reference text, accurately answering the question posed. There is no discrepancy between the question, the reference text, and the given answer, making the answer correct. |
| "input": "What is the capital of California?" / "reference": "Carson City is the Capital of Nevada." / "output": "Carson City" | incorrect | The question asks for the capital of California, but the answer provided is "Carson City," which the reference text correctly identifies as the capital of Nevada, not California. Therefore, the answer does not correctly answer the question about California's capital. |
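Each dataframe returned by run_evals is aligned row for row with the input dataframe, so you can join the labels back onto the original rows for inspection or filtering. Below is a minimal sketch using pandas with illustrative stand-in data (not actual phoenix output):

```python
import pandas as pd

# Illustrative stand-ins for the dataframes above; run_evals returns one
# eval dataframe per evaluator, aligned row for row with the input.
dataframe = pd.DataFrame(
    [
        {"input": "What is the capital of California?", "output": "Sacramento"},
        {"input": "What is the capital of California?", "output": "Carson City"},
    ]
)
hallucination_eval_df = pd.DataFrame(
    [
        {"label": "factual", "explanation": "Matches the reference text."},
        {"label": "hallucinated", "explanation": "Not supported by the reference."},
    ]
)

# Join the eval labels back onto the original rows for inspection.
results = pd.concat([dataframe, hallucination_eval_df], axis=1)

# Keep only the rows the evaluator flagged as hallucinated.
flagged = results[results["label"] == "hallucinated"]
print(flagged[["output", "label"]])
```

Filtering like this makes it easy to spot-check the flagged rows and their explanations before logging results anywhere.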

If you'd like, you can log those evaluations back to Arize to save the results.

Supported Models

These models can be instantiated and used in the LLM eval functions, and they are also directly callable with strings.

model = OpenAIModel(model="gpt-4", temperature=0.6)
model("What is the largest coastal city in France?")

We currently support a growing set of models for LLM evals; please check out the API section for usage.

Supported models:

- GPT-4
- GPT-3.5 Turbo
- GPT-3.5 Instruct
- Azure Hosted Open AI
- Palm 2 Vertex
- AWS Bedrock
- LiteLLM
- Hugging Face Llama 7B (use LiteLLM)
- Anthropic
- Cohere (use LiteLLM)


Copyright © 2023 Arize AI, Inc