In addition to llm_classify (a lower-level evaluator), Arize also supports some more general evaluator abstractions. These are designed to be more out of the box and easier to use, but they do not offer the same configuration options as llm_classify.
We have built simple functions for using our eval prompt templates. These prompts are tested against benchmark datasets and target a precision of 70-90% and an F1 of 70-85%. We use Phoenix, our open-source package, to run evaluations.
To use our evaluators, follow these steps below.
We offer five out-of-the-box evaluators and are working on adding more! Each evaluator's prompt template has a few required columns that you will need to include in the dataframe you pass in; see the documentation for each evaluator for more details.
You can view the exact prompt templates and their benchmarks in our documentation.
All of our evaluators are easily imported with the phoenix library, which you can install using the command below.
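For example (arize-phoenix is the package name on PyPI; depending on your version, the evals may also ship as an extra such as arize-phoenix[evals]):

```bash
pip install arize-phoenix
```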
Import the pre-tested evaluators along with the helper functions using this code snippet.
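A minimal sketch, assuming a recent Phoenix release where the evaluators live in the phoenix.evals module (older releases exposed them under phoenix.experimental.evals):

```python
from phoenix.evals import (
    HallucinationEvaluator,  # pre-tested hallucination evaluator
    OpenAIModel,             # LLM wrapper used to power the evaluators
    QAEvaluator,             # pre-tested QA correctness evaluator
    run_evals,               # helper that runs evaluators over a dataframe
)
```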
For this example, we will create the dataframe from scratch to include the required columns we need: input, reference, and output.
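A minimal sketch of such a dataframe; the rows are hypothetical and chosen to match the example output shown at the end of this page:

```python
import pandas as pd

# Each row needs the query (input), the retrieved context (reference),
# and the model's answer (output).
df = pd.DataFrame(
    {
        "input": [
            "What is the capital of California?",
            "What is the capital of California?",
        ],
        "reference": [
            "Sacramento is the capital of California.",
            "Carson City is the capital city of the American state of Nevada.",
        ],
        "output": ["Sacramento", "Carson City"],
    }
)
```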
Running the evals below produces the results of your hallucination eval and QA correctness eval in two dataframes, hallucination_eval_df and qa_correctness_eval_df. Example output from printing these results (data not included) is shown at the end of this page.
Next, you need to set up the evaluators to use a specific large language model provider. This example uses OpenAIModel, but you can use any of our supported models. In this example, we will use the hallucination evaluator and the QA correctness evaluator. Once instantiated, the models are usable in the LLM Eval functions, and they are also directly callable with strings.
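A sketch of the setup, assuming an OPENAI_API_KEY environment variable is set; the model name here is only an example, and the keyword argument for choosing the model can differ between Phoenix versions:

```python
# LLM that powers both evaluators.
eval_model = OpenAIModel(model="gpt-4o", temperature=0.0)

# The model object is also directly callable with a string.
print(eval_model("Reply with one word: ready"))

# Instantiate the pre-tested evaluators with that model.
hallucination_evaluator = HallucinationEvaluator(eval_model)
qa_correctness_evaluator = QAEvaluator(eval_model)
```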
As noted above, our evaluators require dataframes to be passed with specific column names. You can construct these dataframes manually (as we did above), or you can manipulate the dataframes you retrieve from your traces or spans.
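As a sketch of the second option, assuming you have traced a retrieval application into a running Phoenix instance, a helper such as get_qa_with_reference can assemble the required columns from your spans:

```python
import phoenix as px
from phoenix.session.evaluation import get_qa_with_reference

# Builds a dataframe with input, output, and reference columns
# (indexed by span id) from the traces stored in Phoenix.
queries_df = get_qa_with_reference(px.Client())
```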
Then you can use the run_evals function to run the evals on your dataframe.
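A minimal sketch, running both evaluators over the example dataframe built above; run_evals returns one result dataframe per evaluator, in the same order as the evaluators list:

```python
hallucination_eval_df, qa_correctness_eval_df = run_evals(
    dataframe=df,
    evaluators=[hallucination_evaluator, qa_correctness_evaluator],
    provide_explanation=True,  # also ask the LLM to explain each label
)
```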
If you'd like, you can log the evaluations to save the results.
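A sketch of logging results back to a running Phoenix instance, assuming the eval dataframes were built from Phoenix spans and are therefore indexed by span id (the hand-built dataframe above cannot be attached to spans):

```python
import phoenix as px
from phoenix.trace import SpanEvaluations

px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval_df),
)
```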
We currently support a growing set of models for LLM Evals; please check out the model documentation for the full list. Below is example output from the hallucination eval (hallucination_eval_df), showing the label and the LLM's explanation for each row.
label: factual
explanation: The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination.

label: hallucinated
explanation: The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text.