Arize Phoenix
AI Observability and Evaluation
Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI Engineers and Data Scientists to quickly visualize their data, evaluate performance, track down issues, and export data for improvement. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.
In your Jupyter or Colab environment, run the following command to install Phoenix.
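The base package installs from PyPI (prefix the command with `!` when running it inside a notebook cell):

```shell
pip install arize-phoenix
```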
For full details on how to run Phoenix in various environments such as Databricks, consult our environments guide.
Phoenix works with OpenTelemetry and OpenInference instrumentation. If you are looking to deploy Phoenix as a service rather than a library, see Self-hosting.
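As a sketch of how the OpenTelemetry integration is wired up (assuming a Phoenix instance reachable at the default local collector endpoint and the `arize-phoenix-otel` helper package listed below; the project name is a placeholder):

```python
# Sketch: route OpenTelemetry traces to a local Phoenix instance.
# Assumes Phoenix is running at http://localhost:6006 and that
# arize-phoenix-otel is installed.
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # hypothetical project name
    endpoint="http://localhost:6006/v1/traces",
)
```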
Running Phoenix for the first time? Select a quickstart below.
The main Phoenix package is arize-phoenix. We offer several helper packages below for specific use cases.
Check out a comprehensive list of example notebooks for LLM Traces, Evals, RAG Analysis, and more.
Join the Phoenix Slack community to ask questions, share findings, provide feedback, and connect with other developers.
| Package | What It's For |
|---|---|
| `arize-phoenix` | Running and connecting to the Phoenix client. Used for self-hosting Phoenix and for connecting to a Phoenix instance (either Phoenix Developer Edition or self-hosted) to query spans, run evaluations, generate datasets, etc. `arize-phoenix` automatically includes `arize-phoenix-otel` and `arize-phoenix-evals`. |
| `arize-phoenix-otel` | Sending OpenTelemetry traces to a Phoenix instance |
| `arize-phoenix-evals` | Running evaluations in your environment |
| `openinference-semantic-conventions` | Our semantic layer to add LLM telemetry to OpenTelemetry |
| `openinference-instrumentation-xxxx` | Automatically instrumenting popular packages |
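For example, here is a sketch of how one of the instrumentation packages plugs in, assuming the OpenAI variant (`openinference-instrumentation-openai`) is installed and a tracer provider has been registered as shown above:

```python
# Sketch: auto-instrument the OpenAI client so its calls emit
# OpenInference spans through the registered tracer provider.
# Assumes openinference-instrumentation-openai is installed and
# `tracer_provider` was created with phoenix.otel.register(...).
from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```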