There are subtle differences between the experiments SDKs for Arize and Phoenix, but the core concepts are the same. The example below runs an experiment that writes a haiku and evaluates its tone using an LLM Eval.
You can check out a full notebook example for each.
Arize uses the ArizeDatasetsClient. arize_client.create_dataset returns a dataset_id rather than a dataset object, so if you want to print or manipulate the dataset, you need to fetch it with arize_client.get_dataset.
Phoenix uses px.Client().upload_dataset, which returns the dataset object directly.
# Setup imports
import pandas as pd

# Create dataframe to upload
data = [{"topic": "Zebras"}]
df = pd.DataFrame(data)

#############
# FOR ARIZE
#############
# Setup Arize imports
from arize.experimental.datasets import ArizeDatasetsClient
from arize.experimental.datasets.utils.constants import GENERATIVE
from uuid import uuid1
# Setup Arize datasets connection (keys are available in the Arize UI)
developer_key = ""
space_id = ""
api_key = ""
arize_client = ArizeDatasetsClient(developer_key=developer_key, api_key=api_key)
# Create dataset in Arize
dataset_id = arize_client.create_dataset(
    dataset_name="haiku-topics" + str(uuid1())[:5],
    data=df,
    space_id=space_id,
    dataset_type=GENERATIVE,
)
# Get dataset from Arize
dataset = arize_client.get_dataset(
    space_id=space_id,
    dataset_id=dataset_id,
)
#############
# FOR PHOENIX
#############
import phoenix as px
# Upload dataset to Phoenix
dataset = px.Client().upload_dataset(
    dataset_name="haiku-topics",
    dataframe=df,
    input_keys=["topic"],
)
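To sanity-check what was uploaded, you can inspect what each client returns. A minimal sketch, assuming Arize's get_dataset returns the dataset as a pandas DataFrame, while Phoenix returns a Dataset object:
# FOR ARIZE: the dataset retrieved via get_dataset is a pandas DataFrame
print(dataset.head())
# FOR PHOENIX: upload_dataset returns a Dataset object
print(dataset)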
Task definition
In Arize, we use data from the dataset_row as prompt template variables. The possible variables you can pass in are: input, expected, dataset_row, and metadata.
In Phoenix, you can do this using example. The possible variables you can pass in are: input, expected, reference, example, and metadata. A task can accept any subset of these by name, as in the sketch below.
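For instance, a task can declare more than one of these parameters, and each is bound by name when the experiment runs. A hypothetical sketch (describe_inputs is an illustration, not part of either SDK; shown with Arize's parameter names, whereas in Phoenix you would use example instead of dataset_row):
# Hypothetical: a task may accept any subset of the variables above
def describe_inputs(dataset_row, metadata) -> str:
    # Both arguments are bound by name at run time
    return f"topic={dataset_row.get('topic')}, metadata={metadata}"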
#############
# FOR ARIZE
#############
def create_haiku(dataset_row) -> str:
    # dataset_row uses the dataframe from above
    topic = dataset_row.get("topic")
    # Send topic to LLM generation (see the generate_haiku sketch below)
    return generate_haiku(topic)
#############
# FOR PHOENIX
#############
def create_haiku(example) -> str:
    # example is a Phoenix Example object; its input holds the dataframe columns
    topic = example.input.get("topic")
    # Send topic to LLM generation (see the generate_haiku sketch below)
    return generate_haiku(topic)
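Both tasks above call a generate_haiku helper in place of the original "send topic to LLM generation" comment. A minimal sketch of that helper using the OpenAI SDK with gpt-4o (matching the evaluator below; generate_haiku is an assumed name, and any model provider would work):
from openai import OpenAI

def generate_haiku(topic: str) -> str:
    # Ask the model for a haiku about the given topic (assumed helper, not part of either SDK)
    response = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Write a haiku about {topic}"}],
    )
    return response.choices[0].message.content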
Evaluator definition
For both Arize and Phoenix, you can often use the exact same function as your evaluator. Phoenix has a slightly different way of accessing metadata from your dataset.
Arize uses input, output, dataset_row, and metadata as the optional variables you can pass into the function.
Phoenix uses input, expected, reference, example, and metadata as the variables you can pass into the function. As with tasks, an evaluator can accept any subset of these by name; see the sketch below.
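For example, an evaluator can read the original row as well as the output. A hypothetical sketch (mentions_topic is an illustration; shown with Arize's dataset_row, whereas in Phoenix you would accept example and read example.input):
# Hypothetical: evaluators can also receive the dataset row
def mentions_topic(output, dataset_row) -> bool:
    # Passes when the haiku actually mentions the requested topic
    return dataset_row.get("topic", "").lower() in (output or "").lower()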
# FOR ARIZE IMPORT
from arize.experimental.datasets.experiments.evaluators.base import EvaluationResult
# FOR PHOENIX IMPORT
from phoenix.experiments.types import EvaluationResult
#############
# FOR ARIZE AND PHOENIX
#############
from phoenix.evals import (
    OpenAIModel,
    llm_classify,
)

CUSTOM_TEMPLATE = """
You are evaluating whether tone is positive, neutral, or negative
[Message]: {output}
Respond with either "positive", "neutral", or "negative"
"""

def is_positive(output):
    df_in = pd.DataFrame({"output": output}, index=[0])
    eval_df = llm_classify(
        dataframe=df_in,
        template=CUSTOM_TEMPLATE,
        model=OpenAIModel(model="gpt-4o"),
        rails=["positive", "neutral", "negative"],
        provide_explanation=True,
    )
    label = eval_df["label"][0]
    # Return score, label, and explanation; score is 1 only when the tone is positive
    return EvaluationResult(
        score=int(label == "positive"),
        label=label,
        explanation=eval_df["explanation"][0],
    )
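You can smoke-test the evaluator on its own before running the full experiment (this assumes an OPENAI_API_KEY is set in your environment):
# Quick local check of the evaluator
result = is_positive("Gentle stripes at dawn, zebras graze in golden light")
print(result.label, result.explanation)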
Run the experiment
Arize and Phoenix use slightly different functions to run an experiment due to the permissioning available in Arize.
In Arize, you must pass in the dataset_id and space_id.
In Phoenix, you must pass in the dataset object itself.
#############
# FOR ARIZE
#############
# Uses ArizeDatasetsClient from above
experiment_id = arize_client.run_experiment(
    space_id=space_id,
    dataset_id=dataset_id,
    task=create_haiku,
    evaluators=[is_positive],
    experiment_name="haiku-example",
)
#############
# FOR PHOENIX
#############
from phoenix.experiments import run_experiment
experiment_results = run_experiment(
    dataset=dataset,
    task=create_haiku,
    evaluators=[is_positive],
    experiment_name="haiku-example",
)