Using Arize Evaluators

Overview

We have built simple functions for using our eval prompt templates. These prompts are tested against benchmarked datasets and target a precision of 70-90% and an F1 score of 70-85%. We use Phoenix, our open-source package, to run evaluations.

To use our evaluators, follow the steps below.

Choose an evaluator

The llm_classify function lets you run LLM as a Judge evaluations. Arize has open sourced the code for LLM as a Judge in Phoenix.

Evaluator | Required Columns | Output Labels | Use
Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text given an input query.
QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully answers a question correctly given an input query and reference documents.
Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or irrelevant to the corresponding input.
Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content.
Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document.
Code Generation | query, code | readable, unreadable | Evaluates whether the code correctly implements the query.
Toxicity | text | toxic, non-toxic | Evaluates whether the text is toxic.
Human Vs AI | question, correct_answer, ai_generated_answer | correct, incorrect | Compares a human-written answer with an AI-generated answer.
Citation Evals | conversation, document_text | correct, incorrect | Checks whether a citation correctly answers the question based on the text of the cited page and the conversation.
User Frustration | conversation | frustrated, ok | Checks whether the user is frustrated in the conversation.
SQL Generation | question, query_gen, response | correct, incorrect | Checks whether the generated SQL is correct for the question.
Tool Calling Eval | question, tool_call | correct, incorrect | Checks whether the tool calls and extracted parameters are correct.

Using llm_classify

The llm_classify function classifies your data against an LLM as a Judge prompt template to generate evals. Arize uses the Arize Phoenix open source library to run LLM as a Judge.

Set up the evaluation library

All of our evaluators can be imported from the phoenix library, which you can install using the command below.

pip install "arize-phoenix[evals]"

Import the pre-tested evaluators along with the helper functions using this code snippet.

from phoenix.evals import (
    QA_PROMPT_RAILS_MAP,
    QA_PROMPT_TEMPLATE,
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

Next, you need to set up the evaluators to use a specific large language model provider. This example uses OpenAIModel, but you can use any of our supported evaluation models. In this example, we will use the hallucination evaluator and the QA correctness evaluator.

import os

openai_api_key = "YOUR_OPENAI_KEY"
os.environ["OPENAI_API_KEY"] = openai_api_key

eval_model = OpenAIModel(model="gpt-4-turbo-preview", api_key=openai_api_key)
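OpenAI is not the only option; any supported eval model can be swapped in. As a rough sketch (the model name and constructor arguments here are illustrative, and parameter names can vary slightly between Phoenix versions, so check the API reference), an Anthropic-backed judge would look similar:

from phoenix.evals import AnthropicModel

# Illustrative only: substitute a model name you have access to.
eval_model = AnthropicModel(model="claude-3-5-sonnet-20240620", temperature=0.0)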

Prepare your data

Our evaluation functions require dataframes to be passed with specific column names. You can construct these dataframes manually, or you can adapt the dataframes you retrieve from traces in Arize or Phoenix.

For this example, we will create the dataframe from scratch with the required columns: input, reference, and output.

import pandas as pd
dataframe = pd.DataFrame(
    [
        {
            "input": "What is the capital of California?",
            "reference": "Sacramento is the capital of California.",
            "output": "Sacramento",
        },
        {
            "input": "What is the capital of California?",
            "reference": "Carson City is the Capital of Nevada.",
            "output": "Carson City",
        },
    ]
)
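If your data already lives in Phoenix as traces, you don't have to build the dataframe by hand. A minimal sketch (assuming a running Phoenix instance with traced LLM calls; the column names used in the rename are illustrative and depend on your instrumentation) pulls the spans into a dataframe and maps them onto the columns the evaluator expects:

import phoenix as px

# Export traced spans from a running Phoenix instance into a dataframe.
spans_df = px.Client().get_spans_dataframe()

# Illustrative mapping: rename whichever span attribute columns hold your
# query, retrieved context, and response to input, reference, and output.
dataframe = spans_df.rename(
    columns={
        "attributes.input.value": "input",
        "attributes.retrieval.documents": "reference",
        "attributes.output.value": "output",
    }
)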

Run the eval

Then you can use the llm_classify function to run the evals on your dataframe.


rails = list(QA_PROMPT_RAILS_MAP.values())
qa_correctness_eval_df = llm_classify(
    dataframe=dataframe,
    template=QA_PROMPT_TEMPLATE,
    model=eval_model,
    rails=rails,
    concurrency=20,
)

rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())
hallucination_eval_df = llm_classify(
    dataframe=dataframe,
    template=HALLUCINATION_PROMPT_TEMPLATE,
    model=eval_model,
    rails=rails,
    provide_explanation=True,  # optional: generate an explanation for the label produced by the eval LLM
)
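Each call returns a dataframe with one row per input row, containing a label column (and an explanation column when provide_explanation=True). As a quick, illustrative sanity check before inspecting individual rows, you can look at the label distribution:

# Quick label breakdown for each eval (illustrative).
print(qa_correctness_eval_df["label"].value_counts())
print(hallucination_eval_df["label"].value_counts())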

Now you have the results of your hallucination eval and QA correctness eval! The tables below show what you will see when you print each dataframe (the original data columns are shown for context but are not included in the eval dataframe).

hallucination_eval_df

data (not included) | label | explanation
"input": "What is the capital of California?", "reference": "Sacramento is the capital of California.", "output": "Sacramento" | factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination.
"input": "What is the capital of California?", "reference": "Carson City is the Capital of Nevada.", "output": "Carson City" | hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text.

qa_correctness_eval_df

data (not included) | label | explanation
"input": "What is the capital of California?", "reference": "Sacramento is the capital of California.", "output": "Sacramento" | correct | The question asks for the capital of California. The reference text directly states that Sacramento is the capital of California. Therefore, the given answer, "Sacramento", directly matches the information provided in the reference text, accurately answering the question posed. There is no discrepancy between the question, the reference text, and the given answer, making the answer correct.
"input": "What is the capital of California?", "reference": "Carson City is the Capital of Nevada.", "output": "Carson City" | incorrect | The question asks for the capital of California, but the answer provided is "Carson City," which the reference text correctly identifies as the capital of Nevada, not California. Therefore, the answer does not correctly answer the question about California's capital.

If you'd like, you can log those evaluations back to Arize to save the results.
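How you log them depends on where your traces live. As a minimal, illustrative sketch for the Phoenix case (assuming the eval dataframes are indexed by span ID, as they are when you run evals over spans exported from Phoenix), the call looks roughly like this; for logging to the Arize platform, see the Arize logging docs:

import phoenix as px
from phoenix.trace import SpanEvaluations

# Attach the eval results to their spans in Phoenix.
# Assumes each eval dataframe is indexed by span ID (context.span_id).
px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval_df),
)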

Supported Models

The models are instantiated and passed to the LLM eval functions. They can also be called directly with a string.

model = OpenAIModel(model="gpt-4", temperature=0.6)
model("What is the largest coastal city in France?")

We currently support a growing set of models for LLM evals; please check out the API section for usage.

Supported models:

GPT-4
GPT-4o
GPT-4o Mini
GPT-3.5 Turbo
GPT-3.5 Instruct
Claude 3.5 Sonnet
Claude 3.5 Opus
Claude 3.5 Haiku
Gemini 1.5 Pro
Gemini 1.5 Flash
Gemini 1.0 Pro
Llama 3.1 405B/70B/8B
Azure Hosted OpenAI
Palm 2 Vertex
AWS Bedrock
LiteLLM
Huggingface Llama7B (use LiteLLM)
Anthropic
Cohere (use LiteLLM)
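For models served through providers without a dedicated wrapper (the entries marked "use LiteLLM" above), LiteLLMModel acts as a generic adapter. A rough sketch, with the model string purely illustrative:

from phoenix.evals import LiteLLMModel

# Any model string LiteLLM understands can be used here (illustrative).
eval_model = LiteLLMModel(model="ollama/llama3", temperature=0.0)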

🎓 Learn more about the concept of LLM as a judge.
