Arize Phoenix

AI Observability and Evaluation

Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data for further improvement. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.

Install Phoenix

In your Jupyter or Colab environment, run the following command to install Phoenix with the evals extra.

pip install arize-phoenix[evals]

For full details on how to run Phoenix in various environments such as Databricks, consult our environments guide.
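Once installed, a minimal sketch of starting Phoenix inside a notebook looks like the following (assuming the default in-process setup with no external collector configured; the port shown is the default and may differ in your environment):

import phoenix as px

# Launch the Phoenix app; it serves a local UI (by default at http://localhost:6006)
session = px.launch_app()

# Print the URL of the running session so you can open it in a browser
print(session.url)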

Quickstarts

Running Phoenix for the first time? Select a quickstart below.


Next Steps

Check out a comprehensive list of example notebooks for LLM Traces, Evals, RAG Analysis, and more.

Join the Phoenix Slack community to ask questions, share findings, provide feedback, and connect with other developers.

Supported Eval Models

The Phoenix library supports a set of foundation models for Evals; a brief usage sketch follows the lists below.

Direct Integrations:

  • OpenAI

  • Vertex AI

  • Azure OpenAI

  • Anthropic

  • Mixtral/Mistral

  • AWS Bedrock

Partner Integrations:

  • Llama

  • Falcon

  • Code Llama

  • Locally hosted models
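As a rough sketch of how one of these models plugs into Phoenix evals, the example below uses the OpenAI integration with the built-in RAG relevance template (the dataframe contents are illustrative, and an OPENAI_API_KEY is assumed to be set in the environment):

import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    llm_classify,
)

# Example dataset: each row pairs a query with a retrieved document
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an open-source AI observability library."],
    }
)

# Use an OpenAI model as the eval judge
model = OpenAIModel(model="gpt-4o-mini")

# Classify each row as relevant / unrelated using the built-in template and rails
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
evals_df = llm_classify(
    dataframe=df,
    model=model,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=rails,
)
print(evals_df["label"])

Swapping in another provider (for example Anthropic or Bedrock) follows the same pattern: construct the corresponding model class and pass it to llm_classify.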
