Q&A on Retrieved Data

This Eval checks whether a question was correctly answered by the system based on the retrieved data. In contrast to retrieval Evals, which check the individual chunks of data returned, this is a system-level check that the question was answered correctly. It takes three inputs:

  • question: The question the Q&A system was asked.

  • sampled_answer: The answer generated by the Q&A system.

  • context: The retrieved context used to answer the question; this is what the Q&A Eval checks the answer against.
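
These fields map onto the columns of the dataframe passed to the Eval (the df_sample used below). A minimal sketch, with purely illustrative row content:

import pandas as pd

# One row per Q&A interaction to grade; column names must match the
# template variables: question, context, sampled_answer.
df_sample = pd.DataFrame(
    {
        "question": ["Who wrote 'Pride and Prejudice'?"],
        "context": ["Pride and Prejudice is an 1813 novel by Jane Austen."],
        "sampled_answer": ["Jane Austen wrote 'Pride and Prejudice'."],
    }
)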

Q&A Eval Template

You are given a question, an answer and reference text. You must determine whether the
given answer correctly answers the question based on the reference text. Here is the data:
    [BEGIN DATA]
    ************
    [Question]: {question}
    ************
    [Reference]: {context}
    ************
    [Answer]: {sampled_answer}
    [END DATA]
Your response must be a single word, either "correct" or "incorrect",
and should not contain any text or characters aside from that word.
"correct" means that the question is correctly and fully answered by the answer.
"incorrect" means that the question is not correctly or only partially answered by the
answer.
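
The {question}, {context}, and {sampled_answer} placeholders are filled in from each dataframe row. A minimal sketch of that substitution, where qa_template_text is a hypothetical string abbreviating the template above:

# llm_classify performs this substitution for every row of the dataframe.
qa_template_text = (
    "[Question]: {question}\n"
    "[Reference]: {context}\n"
    "[Answer]: {sampled_answer}"
)
prompt = qa_template_text.format(
    question="Who wrote 'Pride and Prejudice'?",
    context="Pride and Prejudice is an 1813 novel by Jane Austen.",
    sampled_answer="Jane Austen.",
)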

How To Run the Eval

import phoenix.evals.default_templates as templates
from phoenix.evals import (
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)

model = OpenAIModel(
    model_name="gpt-4",
    temperature=0.0,
)

# The rails force the output to the specific values defined by the template.
# They strip stray text such as ",,," or "..." — anything that is not the
# binary value expected from the template.
rails = list(templates.QA_PROMPT_RAILS_MAP.values())
Q_and_A_classifications = llm_classify(
    dataframe=df_sample,
    template=templates.QA_PROMPT_TEMPLATE,
    model=model,
    rails=rails,
    provide_explanation=True,  # optional: have the eval LLM explain each label
)
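
llm_classify returns a dataframe with one row per input row: a "label" column holds the rail value ("correct" or "incorrect"), and an "explanation" column is present when provide_explanation=True. A quick way to summarize a run (a sketch, assuming those column names):

# "label" is one of the rails; "explanation" is present because
# provide_explanation=True was passed above.
print(Q_and_A_classifications["label"].value_counts())
fraction_correct = (Q_and_A_classifications["label"] == "correct").mean()
print(f"Fraction graded correct: {fraction_correct:.2%}")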

Benchmark Results

[Benchmark charts: GPT-4 Results, GPT-3.5 Results, Claude V2 Results]

| Eval      | GPT-4o | GPT-4 | GPT-4 Turbo | Gemini Pro | GPT-3.5 | GPT-3.5 Turbo Instruct | Palm (Text Bison) | Claude V2 |
| --------- | ------ | ----- | ----------- | ---------- | ------- | ---------------------- | ----------------- | --------- |
| Precision | 1      | 1     | 1           | 1          | 0.99    | 0.42                   | 1                 | 1         |
| Recall    | 0.89   | 0.92  | 0.98        | 0.98       | 0.83    | 1                      | 0.94              | 0.64      |
| F1        | 0.94   | 0.96  | 0.99        | 0.99       | 0.90    | 0.59                   | 0.97              | 0.78      |
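
F1 is the harmonic mean of precision and recall, so each F1 cell can be recomputed from the two rows above it; for example, the GPT-4 column:

# Harmonic mean of precision and recall for the GPT-4 column.
precision, recall = 1.0, 0.92
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.96, matching the table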

| Throughput  | GPT-4   | GPT-4 Turbo | GPT-3.5 |
| ----------- | ------- | ----------- | ------- |
| 100 Samples | 124 sec | 66 sec      | 67 sec  |
