Summarization

When To Use the Summarization Eval Template

This Eval assesses the quality of the summaries produced by a summarization task. The template variables are:

  • document: The document text to summarize

  • summary: The summary of the document
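For illustration, an input dataframe might be built like this. The column names `input` and `output` below are an assumption chosen to match the placeholders in the template shown next; align them with whichever variables your template actually references.

```python
import pandas as pd

# Hypothetical example row: each row pairs a source document with a candidate summary.
df_sample = pd.DataFrame(
    {
        "input": [
            "The city council voted 7-2 on Tuesday to approve a bike lane "
            "network covering 40 miles of downtown streets."
        ],
        "output": [
            "The council approved a 40-mile downtown bike lane network."
        ],
    }
)
```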

Summarization Eval Template

You are comparing the summary text and its original document and trying to determine
if the summary is good. Here is the data:
    [BEGIN DATA]
    ************
    [Summary]: {output}
    ************
    [Original Document]: {input}
    [END DATA]
Compare the Summary above to the Original Document and determine if the Summary is
comprehensive, concise, coherent, and independent relative to the Original Document.
Your response must be a single word, either "good" or "bad", and should not contain any text
or characters aside from that. "bad" means that the Summary is not comprehensive,
concise, coherent, and independent relative to the Original Document. "good" means the
Summary is comprehensive, concise, coherent, and independent relative to the Original Document.

We are continually iterating on our templates; view the most up-to-date template on GitHub.

Benchmark Results

GPT-4 Results

GPT-3.5 Results

Claude V2 Results

GPT-4 Turbo

How To Run the Eval

import phoenix.evals.default_templates as templates
from phoenix.evals import (
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)

model = OpenAIModel(
    model_name="gpt-4",
    temperature=0.0,
)

# The rails constrain the eval output to the specific values defined by the template.
# They strip stray text such as ",,," or "..." and ensure the binary value
# expected by the template is returned.
rails = list(templates.SUMMARIZATION_PROMPT_RAILS_MAP.values())
summarization_classifications = llm_classify(
    dataframe=df_sample,  # a dataframe whose columns supply the template variables
    template=templates.SUMMARIZATION_PROMPT_TEMPLATE,
    model=model,
    rails=rails,
    provide_explanation=True,  # optional: generate an explanation for each label produced by the eval LLM
)
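To make the comments about rails concrete, here is a rough sketch of the idea (this is an illustration only, not Phoenix's actual implementation): the raw LLM output is normalized and snapped to one of the allowed rail values, and anything that cannot be matched unambiguously is flagged.

```python
def snap_to_rails(raw_output: str, rails: list[str], unparsable: str = "NOT_PARSABLE") -> str:
    # Trim whitespace and stray punctuation/quotes, then look for exactly one rail match.
    cleaned = raw_output.strip().strip('".,').lower()
    matches = [r for r in rails if r.lower() == cleaned]
    return matches[0] if len(matches) == 1 else unparsable

snap_to_rails('  "good". ', ["good", "bad"])   # snaps to "good"
snap_to_rails("It is good-ish", ["good", "bad"])  # flagged as NOT_PARSABLE
```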

The above shows how to use the summarization Eval template.
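Once you have labels, you can score the eval against a labeled dataset to produce precision, recall, and F1 figures like those in the benchmark table below. The tiny hand-made label lists here are hypothetical, and treating "bad" as the positive class is an assumption:

```python
# Hypothetical ground-truth and predicted labels for a handful of examples.
true_labels = ["good", "bad", "good", "bad", "good"]
pred_labels = ["good", "bad", "bad", "bad", "good"]

# Count true positives, false positives, and false negatives,
# assuming "bad" is the positive class.
tp = sum(t == p == "bad" for t, p in zip(true_labels, pred_labels))
fp = sum(t == "good" and p == "bad" for t, p in zip(true_labels, pred_labels))
fn = sum(t == "bad" and p == "good" for t, p in zip(true_labels, pred_labels))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```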

| Eval | GPT-4o | GPT-4 | GPT-4 Turbo | Gemini Pro | GPT-3.5 | GPT-3.5 Instruct | Palm 2 (Text Bison) | Claude V2 | Llama 7b (soon) |
|---|---|---|---|---|---|---|---|---|---|
| Precision | 0.87 | 0.79 | 0.94 | 0.61 | 1 | 1 | 0.57 | 0.75 | — |
| Recall | 0.63 | 0.88 | 0.641 | 1.0 | 0.1 | 0.16 | 0.7 | 0.61 | — |
| F1 | 0.73 | 0.83 | 0.76 | 0.76 | 0.18 | 0.280 | 0.63 | 0.67 | — |
