NLP_Metrics

Install extra dependencies to compute LLM evaluation metrics

Install the NLP metrics extra for the SDK:

!pip install arize[NLP_Metrics] 

Then import the metric functions from arize.pandas.generative.nlp_metrics, for example:
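A minimal setup sketch (the metric function names match the examples below; the summary and reference_summary columns are placeholders for your own data):

import pandas as pd

from arize.pandas.generative.nlp_metrics import (
    bleu,
    google_bleu,
    meteor,
    rouge,
    sacre_bleu,
)

# Toy dataframe with one generated response and one human reference per row
df = pd.DataFrame(
    {
        'summary': ['the cat sat on the mat'],
        'reference_summary': ['the cat is on the mat'],
    }
)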

Metrics

bleu

The BLEU score is typically used to evaluate the quality of text machine-translated from one natural language to another. BLEU computes scores for individual translated segments, typically sentences, by comparing them to a set of high-quality reference translations. These scores are then averaged over the entire corpus to estimate the overall quality of the translation.

Code Example

bleu_scores = bleu(
    response_col=df['summary'],
    references_col=df['reference_summary'],
    max_order=5,  # optional; maximum n-gram order used when computing the score
    smooth=True   # optional; whether to apply smoothing to the n-gram precisions
)
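Assuming the function returns one score per row, aligned with df, the scores can be kept as a new column for inspection or downstream logging:

df['bleu_score'] = bleu_scores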

sacre_bleu

SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich's multi-bleu-detok.perl, it produces the official Workshop on Machine Translation (WMT) scores but works with plain text.

Code Example

sacre_bleu_scores = sacre_bleu(
    response_col=df['summary'],
    references_col=df['reference_summary'],
    smooth_method='floor',  # optional; smoothing method to apply
    smooth_value=0.1,       # optional; defaults to 0.1 when smooth_method is 'floor'
    lowercase=True          # optional; lowercase inputs for case-insensitive scoring
)

google_bleu

BLEU is designed as a corpus-level measure and has known limitations when applied to single sentences. To address this in reinforcement learning (RL) experiments, Google introduced a sentence-level variant, the GLEU (Google-BLEU) score, defined as the minimum of n-gram recall and precision.

Code Example

google_bleu_scores = google_bleu(
    response_col=df['summary'],
    references_col=df['reference_summary'],
    min_len=2,  # optional; minimum n-gram order to extract
    max_len=5   # optional; maximum n-gram order to extract
)
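For intuition, the following is a small illustrative sketch of the GLEU idea for a single response/reference pair. It is not the SDK's implementation; the whitespace tokenization and default n-gram range are assumptions:

from collections import Counter

def ngram_counts(tokens, min_len=1, max_len=4):
    # Count every n-gram of length min_len..max_len, with multiplicity
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def gleu_sentence(response, reference, min_len=1, max_len=4):
    # GLEU is the minimum of n-gram precision and n-gram recall
    hyp = ngram_counts(response.split(), min_len, max_len)
    ref = ngram_counts(reference.split(), min_len, max_len)
    overlap = sum((hyp & ref).values())  # matching n-grams, counts clipped to the smaller side
    precision = overlap / max(sum(hyp.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return min(precision, recall)

gleu_sentence('the cat sat on the mat', 'the cat is on the mat')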

rouge

ROUGE is a set of metrics, and an accompanying software package, commonly used to evaluate automatic summarization and machine translation in natural language processing. The metrics compare a machine-produced summary or translation against one or more human-produced references; common variants include ROUGE-1 and ROUGE-2 (unigram and bigram overlap) and ROUGE-L/ROUGE-Lsum (based on the longest common subsequence).

Code Example

rouge_scores = rouge(
    response_col=df['summary'],
    references_col=df['reference_summary'],
    rouge_types=['rouge1', 'rouge2', 'rougeL', 'rougeLsum']  # optional; which ROUGE variants to compute
)

meteor

METEOR is an automatic metric typically used to evaluate machine translation. It is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations.

Code Example

meteor_scores = meteor(
    response_col=df['summary'],
    references_col=df['reference_summary'],
    alpha=0.8,  # optional; relative weighting of precision vs. recall
    beta=4,     # optional; shape of the fragmentation penalty
    gamma=0.4   # optional; relative weight of the fragmentation penalty
)
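For reference, in the standard METEOR formulation (for example, as implemented in NLTK), P and R are the unigram precision and recall of the aligned words, and "chunks" counts contiguous matched spans; the parameters above typically enter the score as:

$$
F_{mean} = \frac{P \cdot R}{\alpha P + (1 - \alpha) R}, \qquad
\mathrm{Penalty} = \gamma \left(\frac{\mathrm{chunks}}{\mathrm{matches}}\right)^{\beta}, \qquad
\mathrm{METEOR} = F_{mean} \cdot (1 - \mathrm{Penalty})
$$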
