Arize Templates

Overview

We have built simple functions for using our eval prompt templates. These prompts are tested against benchmarked datasets and target precision of 70-90% and F1 of 70-85%. We use Phoenix, our open-source package, to run evaluations.
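As a refresher, the precision and F1 targets above use the standard definitions. A quick sketch with made-up counts (not Arize's benchmark code):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard precision/recall/F1 from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. 8 true positives, 2 false positives, 2 false negatives
p, r, f1 = precision_recall_f1(8, 2, 2)
```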

To use our evaluators, follow these steps below.

  1. Choose an evaluator

  2. Setup the evaluation library

  3. Prepare your data

  4. Run the eval

Choose an evaluator

llm_classify runs LLM as a Judge evaluations across your LLM outputs. You can use any of the evaluation templates below, and you can find notebook tutorials on how to use them in our Phoenix repo.

| Evaluator | Required Columns | Output Labels | Use |
| --- | --- | --- | --- |
| Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text, given an input query. |
| QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully and correctly answers a question, given an input query and reference documents. |
| Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or irrelevant to the corresponding input. |
| Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content. |
| Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document. |
| Code Generation | query, code | readable, unreadable | Evaluates whether generated code correctly implements the query. |
| Toxicity | text | toxic, non-toxic | Evaluates whether text is toxic. |
| Human Vs AI | question, correct_answer, ai_generated_answer | correct, incorrect | Compares a human-written answer against an AI-generated answer. |
| Citation Evals | conversation, document_text | correct, incorrect | Checks whether a citation correctly answers the question, based on the text of the cited page and the conversation. |
| User Frustration | conversation | frustrated, ok | Checks whether the user is frustrated in the conversation. |
| SQL Generation | question, query_gen, response | correct, incorrect | Checks whether the generated SQL is correct, given the question. |
| Tool Calling Eval | question, tool_call | correct, incorrect | Checks whether tool calling function calls and extracted parameters are correct. |
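The "Required Columns" above determine the shape of the dataframe you pass in. For example, the Toxicity Evaluator only requires an input column, so a minimal dataframe for it looks like the sketch below (the strings are made-up examples):

```python
import pandas as pd

# Minimal dataframe for the Toxicity Evaluator, which only requires
# an "input" column (example strings are made up).
toxicity_df = pd.DataFrame(
    [
        {"input": "Have a wonderful day!"},
        {"input": "You people are all the same."},
    ]
)
```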

Have Copilot Choose an Evaluator

If you are unsure which eval to choose, ✨Copilot can choose one for you. Navigate to the main chat in the UI and ask Copilot to suggest a Phoenix eval for your application.

Using LLM_Classify

llm_classify uses an LLM to classify your data and generate evals. Arize uses the open-source Arize Phoenix library to run LLM as a Judge evaluations.
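Conceptually, llm_classify fills the prompt template for each row, calls the eval model, and snaps the raw response onto the allowed output labels (the "rails"). A simplified sketch of that loop is below; the real Phoenix implementation also handles concurrency, retries, and explanations, and the stub model here is purely hypothetical:

```python
def classify_rows(rows, template, model, rails):
    """Simplified sketch of an llm_classify-style loop (not the real Phoenix code)."""
    labels = []
    for row in rows:
        prompt = template.format(**row)      # fill the eval template per row
        raw = model(prompt).strip().lower()  # call the eval LLM
        # snap the response onto the allowed rails, else mark it unparsable
        labels.append(raw if raw in rails else "NOT_PARSABLE")
    return labels

# Stub model for illustration only; a real run would call OpenAIModel etc.
stub_model = lambda prompt: "factual"
rows = [{"input": "q", "reference": "r", "output": "o"}]
template = "Query: {input}\nReference: {reference}\nAnswer: {output}\nLabel:"
labels = classify_rows(rows, template, stub_model, ["factual", "hallucinated"])
```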

Set up the evaluation library

All of our evaluators can be imported from the phoenix library, which you can install with the commands below.

pip install -q "arize-phoenix>=4.29.0"
pip install -q openai

Import the pre-tested evaluators along with the helper functions using this code snippet.

import os

from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    QA_PROMPT_RAILS_MAP,
    QA_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

Next, you need to set up the evaluators to use a specific large language model provider. This example uses OpenAIModel, but you can use any of our supported evaluation models. Here, we will use the hallucination evaluator and the QA correctness evaluator.

openai_api_key = "YOUR_OPENAI_KEY"
os.environ["OPENAI_API_KEY"] = openai_api_key

eval_model = OpenAIModel(model="gpt-4o", api_key=openai_api_key)

Prepare your data

Our evaluation functions require dataframes to be passed with specific column names. You can construct these dataframes manually or you can manipulate the dataframes you retrieve from traces in Arize or traces in Phoenix.

For this example, we will create the dataframe from scratch with the required columns: input, reference, and output.

import pandas as pd
dataframe = pd.DataFrame(
    [
        {
            "input": "What is the capital of California?",
            "reference": "Sacramento is the capital of California.",
            "output": "Sacramento",
        },
        {
            "input": "What is the capital of California?",
            "reference": "Carson City is the Capital of Nevada.",
            "output": "Carson City",
        },
    ]
)
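Before running an eval, it is worth double-checking that your dataframe has every column the chosen template requires. A small hypothetical helper (not part of the Phoenix API):

```python
def missing_columns(columns, required=("input", "reference", "output")):
    """Return the required eval columns absent from `columns` (hypothetical helper)."""
    present = set(columns)
    return [c for c in required if c not in present]

# e.g. a dataframe with only "input" and "output" is missing "reference"
gaps = missing_columns(["input", "output"])
```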

Run the eval

Then you can use the llm_classify function to run the evals on your dataframe.

rails = list(QA_PROMPT_RAILS_MAP.values())
qa_correctness_eval_df = llm_classify(
    dataframe=dataframe,
    template=QA_PROMPT_TEMPLATE,
    model=eval_model,
    rails=rails,
    concurrency=20,
)

rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())
hallucination_eval_df = llm_classify(
    dataframe=dataframe, 
    template=HALLUCINATION_PROMPT_TEMPLATE, 
    model=eval_model, 
    rails=rails,
    provide_explanation=True,  # optional: generate an explanation for each label
)
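The rails passed to llm_classify constrain what the eval LLM may output. Each *_RAILS_MAP pairs internal keys with the exact label strings; the snippet below mimics that shape as an illustration (it is not the actual phoenix constant):

```python
# Illustration of the shape of a rails map (not the real phoenix object):
# internal keys map to the exact labels the eval LLM is allowed to emit.
qa_rails_map = {True: "correct", False: "incorrect"}
rails = list(qa_rails_map.values())
```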

Now you have the results of your hallucination eval and QA correctness eval! The tables below show the printed results (the data column is shown for reference but is not included in the returned dataframes).

hallucination_eval_df

| data (not included) | label | explanation |
| --- | --- | --- |
| "input": "What is the capital of California?"; "reference": "Sacramento is the capital of California."; "output": "Sacramento" | factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination. |
| "input": "What is the capital of California?"; "reference": "Carson City is the Capital of Nevada."; "output": "Carson City" | hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text. |

qa_correctness_eval_df

| data (not included) | label | explanation |
| --- | --- | --- |
| "input": "What is the capital of California?"; "reference": "Sacramento is the capital of California."; "output": "Sacramento" | correct | The question asks for the capital of California. The reference text directly states that Sacramento is the capital of California. Therefore, the given answer, "Sacramento", directly matches the information provided in the reference text, accurately answering the question posed. There is no discrepancy between the question, the reference text, and the given answer, making the answer correct. |
| "input": "What is the capital of California?"; "reference": "Carson City is the Capital of Nevada."; "output": "Carson City" | incorrect | The question asks for the capital of California, but the answer provided is "Carson City," which the reference text correctly identifies as the capital of Nevada, not California. Therefore, the answer does not correctly answer the question about California's capital. |
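To inspect labels alongside your original rows, you can attach the eval results back onto the input dataframe. A sketch with made-up stand-in data, assuming (as in the results above) that llm_classify returns one result row per input row, in order:

```python
import pandas as pd

# Made-up stand-ins for the input dataframe and an eval result dataframe.
dataframe = pd.DataFrame(
    [{"input": "q1", "output": "a1"}, {"input": "q2", "output": "a2"}]
)
hallucination_eval_df = pd.DataFrame(
    [{"label": "factual"}, {"label": "hallucinated"}]
)

# Attach the eval labels positionally (one result row per input row).
combined = dataframe.assign(
    hallucination_label=hallucination_eval_df["label"].values
)
```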

If you'd like, you can log those evaluations back to Arize to save the results.

Supported Models

These models are instantiated and used inside the LLM eval functions, and they are also directly callable with strings.

model = OpenAIModel(model_name="gpt-4", temperature=0.6)
model("What is the largest coastal city in France?")

We currently support a growing set of models for LLM evals; please check out the API section for usage.

| Model | Support |
| --- | --- |
| GPT-4 | ✔ |
| GPT-4o | ✔ |
| GPT-4o Mini | ✔ |
| GPT-3.5 Turbo | ✔ |
| GPT-3.5 Instruct | ✔ |
| Claude 3.5 Sonnet | ✔ |
| Claude 3 Opus | ✔ |
| Claude 3.5 Haiku | ✔ |
| Gemini 1.5 Pro | ✔ |
| Gemini 1.5 Flash | ✔ |
| Gemini 1.0 Pro | ✔ |
| Llama 3.1 405B/70B/8B | ✔ |
| Azure Hosted OpenAI | ✔ |
| Palm 2 Vertex | ✔ |
| AWS Bedrock | ✔ |
| LiteLLM | ✔ |
| Huggingface Llama7B | (use LiteLLM) |
| Anthropic | ✔ |
| Cohere | (use LiteLLM) |

🎓 Learn more about the concept of LLM as a judge.

For notebook tutorials, see the phoenix/tutorials/evals directory in the Arize-ai/phoenix GitHub repository.