Arize Phoenix
AI Observability and Evaluation
Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI Engineers and Data Scientists to quickly visualize their data, evaluate performance, track down issues, and export data to drive improvements. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.
In your Python, Jupyter, or Colab environment, run the following command to install.
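For example, with pip (the main package, arize-phoenix, is described in the package list below):

```shell
pip install arize-phoenix
```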
For full details on how to run Phoenix in various environments such as Databricks, consult our environments guide.
Phoenix works with OpenTelemetry and OpenInference instrumentation. If you are looking to deploy Phoenix as a service rather than a library, see Self-hosting.
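As a minimal sketch of what that looks like in code, the example below assumes the arize-phoenix-otel and openinference-instrumentation-openai packages are installed and a Phoenix instance is running locally (by default at http://localhost:6006); the project name is an illustrative placeholder.

```python
# Minimal sketch: send OpenTelemetry traces from an OpenAI-based app to Phoenix.
# Assumes `pip install arize-phoenix-otel openinference-instrumentation-openai`
# and a Phoenix instance reachable at its default local address.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure an OpenTelemetry tracer provider that exports spans to Phoenix.
tracer_provider = register(project_name="my-llm-app")  # "my-llm-app" is a placeholder

# Auto-instrument the OpenAI client so each call emits OpenInference spans.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```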
Phoenix offers tools to streamline your prompt engineering workflow.
Prompt Management - Create, store, modify, and deploy prompts for interacting with LLMs
Prompt Playground - Play with prompts, models, invocation parameters and track your progress via tracing and experiments
Span Replay - Replay the invocation of an LLM. Whether it's an LLM step in an LLM workflow or a router query, you can step into the LLM invocation and see if any modifications to the invocation would have yielded a better outcome.
Prompts in Code - Phoenix offers client SDKs to keep your prompts in sync across different applications and environments.
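As a rough illustration of that last point, the sketch below assumes the Phoenix Python client package (arize-phoenix-client), a running Phoenix instance, and a prompt already saved under the hypothetical name "article-summarizer"; exact call signatures may differ slightly by version.

```python
# Rough sketch: pull a stored prompt into application code via the Phoenix client.
# Assumes `pip install arize-phoenix-client` and a prompt saved as "article-summarizer"
# (a hypothetical name) in a running Phoenix instance.
from phoenix.client import Client

client = Client()  # connects using default host/port or environment variables
prompt = client.prompts.get(prompt_identifier="article-summarizer")

# Render the stored template with runtime variables, then pass the result to your LLM client.
params = prompt.format(variables={"article": "..."})
```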
Running Phoenix for the first time? Select a quickstart below.
The main Phoenix package is arize-phoenix. We offer several packages below for specific use cases.
arize-phoenix - Running and connecting to the Phoenix client. Used for: self-hosting Phoenix, and connecting to a Phoenix instance (either Phoenix Developer Edition or self-hosted) to query spans, run evaluations, generate datasets, etc. arize-phoenix automatically includes arize-phoenix-otel and arize-phoenix-evals.
arize-phoenix-otel - Sending OpenTelemetry traces to a Phoenix instance
arize-phoenix-evals - Running evaluations in your environment
openinference-semantic-conventions - Our semantic layer to add LLM telemetry to OpenTelemetry
openinference-instrumentation-xxxx - Automatically instrumenting popular packages
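To show where arize-phoenix-evals fits, here is a minimal sketch of an LLM-as-a-judge evaluation over a pandas dataframe; the dataframe contents, column names, and model choice are illustrative assumptions, and signatures may vary slightly across versions.

```python
# Minimal sketch: run a retrieval-relevance evaluation with arize-phoenix-evals.
# Assumes `pip install arize-phoenix-evals openai pandas` and an OPENAI_API_KEY in the environment.
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    llm_classify,
)

# Illustrative data: the relevance template grades a retrieved document against a query.
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an open-source AI observability library."],
    }
)

evals_df = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o-mini"),  # model choice is an assumption
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),
    provide_explanation=True,
)
print(evals_df["label"])  # one relevance label per row, with explanations alongside
```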
Check out a comprehensive list of example notebooks for LLM Traces, Evals, RAG Analysis, and more.
Join the Phoenix Slack community to ask questions, share findings, provide feedback, and connect with other developers.