Release Notes
04.09.2025: Project Management API Enhancements ✨
Available in Phoenix 8.24+
This update enhances the Project Management API with more flexible project identification:
Enhanced project identification: Added support for identifying projects by both ID and hex-encoded name, and introduced a new _get_project_by_identifier helper function
Also includes streamlined operations, better validation & error handling, and expanded test coverage
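For illustration, here is a minimal sketch of the two lookup styles (the endpoint path, port, and placeholder ID are assumptions; the hex encoding of the name is shown explicitly):

```python
import requests

BASE = "http://localhost:6006"  # assumed local Phoenix instance

# Look up a project by its ID (placeholder value)
resp = requests.get(f"{BASE}/v1/projects/<project-id>")

# ...or by its hex-encoded name: "my-project" -> "6d792d70726f6a656374"
hex_name = "my-project".encode().hex()
resp = requests.get(f"{BASE}/v1/projects/{hex_name}")
print(resp.json())
```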
04.09.2025: New REST API for Projects with RBAC 📽️
Available in Phoenix 8.23+
This release introduces a REST API for managing projects, complete with full CRUD functionality and robust access control. Key features include:
CRUD Operations: Create, read, update, and delete projects via the new API endpoints.
Role-Based Access Control:
Admins can create, read, update, and delete projects
Members can create and read projects, but cannot modify or delete them.
Additional Safeguards: immutable project names, default-project protection, and comprehensive integration tests
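A hedged sketch of the CRUD flow (routes, payload, and response shapes are assumptions; the bearer-token header follows the auth scheme described in the 09.26.2024 entry below):

```python
import requests

BASE = "http://localhost:6006"
HEADERS = {"Authorization": "Bearer <api-key>"}  # admin key assumed for delete

# Create a project (allowed for admins and members)
created = requests.post(f"{BASE}/v1/projects", json={"name": "demo"}, headers=HEADERS)

# Read all projects (allowed for admins and members)
projects = requests.get(f"{BASE}/v1/projects", headers=HEADERS).json()

# Delete a project (admins only; a member's request is rejected)
project_id = created.json()["data"]["id"]  # response shape is an assumption
requests.delete(f"{BASE}/v1/projects/{project_id}", headers=HEADERS)
```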
04.03.2025: Phoenix Client Prompt Tagging 🏷️
Available in Phoenix 8.22+
We’ve added support for Prompt Tagging in the Phoenix client. This new feature gives you more control and visibility over your prompts throughout the development lifecycle.
Tag prompts directly in your code and see those tags reflected in the Phoenix UI.
Label prompt versions as development, staging, or production, or define your own custom tags.
Add tag descriptions to provide additional context, or list out all tags.
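A minimal sketch with the Python client (the tags methods and argument names here are assumptions to be checked against the client docs; the version ID is a placeholder):

```python
from phoenix.client import Client

client = Client()  # assumes PHOENIX_COLLECTOR_ENDPOINT points at your instance

# Tag a prompt version as "production" (version ID is a placeholder)
client.prompts.tags.create(
    prompt_version_id="<prompt-version-id>",
    name="production",
    description="Promoted after evals passed",
)

# List all tags on that version
for tag in client.prompts.tags.list(prompt_version_id="<prompt-version-id>"):
    print(tag)
```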
04.02.2025: Improved Span Annotation Editor ✍️
Available in Phoenix 8.21+
The new span aside moves the Span Annotation editor into a dedicated panel, providing a clearer view for adding annotations and enhancing customization of your setup. Read this documentation to learn how annotations can be used.
04.01.2025: Support for MCP Span Tool Info in OpenAI Agents SDK 🔨
Available in Phoenix 8.20+
We've added support for MCP Span Tool Info to the OpenAI Agents SDK instrumentation, allowing tracing and extraction of useful information about MCP tool listings. Use the Phoenix OpenAI Agents SDK integration for powerful agent tracing.
03.27.2025: Span View Improvements 👀
Available in Phoenix 8.19+
You can now toggle the option to treat orphan spans as root when viewing your spans. Additionally, we've enhanced the UI with an icon view in span details for better visibility in smaller displays. Learn more here.
03.24.2025: Tracing Configuration Tab 🖌️
Available in Phoenix 8.19+
Within each project, there is now a Config tab to enhance customization. The default tab can now be set per project, ensuring the preferred view is displayed.
Learn more in projects docs.
03.19.2025: Access to New Integrations in Projects 🔌
Available in Phoenix 8.15+
In the New Project tab, we've added quick setup to instrument your application for BeeAI, SmolAgents, and the OpenAI Agents SDK. Easily configure these integrations with streamlined instructions.
Check out all Phoenix tracing integrations here.
03.18.2025: Resize Span, Trace, and Session Tables 🔀
Available in Phoenix 8.14+
We've added the ability to resize Span, Trace, and Session tables. Resizing preferences are now persisted in the tracing store, ensuring settings are maintained per-project and per-table.
03.14.2025: OpenAI Agents Instrumentation 📡
Available in Phoenix 8.13+
We've introduced instrumentation for the OpenAI Agents SDK for Python, which provides enhanced visibility into agent behavior and performance.
Installation
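The instrumentor ships as the openinference-instrumentation-openai-agents package, installable with pip install openinference-instrumentation-openai-agents.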
Includes an OpenTelemetry Instrumentor that traces agents, LLM calls, tool usage, and handoffs.
With minimal setup, use the register function to connect your app to Phoenix and view real-time traces of agent workflows.
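For example (the project name is illustrative; the instrumentor class comes from the openinference-instrumentation-openai-agents package):

```python
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor
from phoenix.otel import register

# Point an OTEL tracer provider at Phoenix (project name is illustrative)
tracer_provider = register(project_name="agents-demo")

# Trace agent runs, LLM calls, tool usage, and handoffs
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer_provider)
```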
For more details on a quick setup, check out our docs.
03.07.2025: New Prompt Playground, Evals, and Integration Support 🦾
Available in Phoenix 8.9+
Prompt Playground: Now supports GPT-4.5, Anthropic Claude 3.7 Sonnet, and thinking budgets
Instrumentation: SmolagentsInstrumentor to trace smolagents by Hugging Face
Evals: o3 support, Audio & Multi-Modal Evaluations
Integrations: Phoenix now supports LiteLLM Proxy & Cleanlab evals
03.06.2025: Project Improvements 📽️
Available in Phoenix 8.8+
We’ve introduced several enhancements to Projects, providing greater flexibility and control over how you interact with data. These updates include:
Persistent Column Selection on Tables: Your selected columns will now remain consistent across sessions, ensuring a more seamless workflow.
Metadata Filters from the Table: Easily filter data directly from the table view using metadata attributes.
Custom Time Ranges: You can now specify custom time ranges to filter traces and spans.
Root Span Filter for Spans: Improved filtering options allow you to filter by root spans, helping to isolate and debug issues more effectively.
Metadata Quick Filters: Quickly apply common metadata filters for faster navigation.
Performance: Major speed improvements in project tracing views & visibility into database usage in settings
Check out projects docs for more!
02.19.2025: Prompts 📃
Available in Phoenix 8.0+
Phoenix prompt management now lets you create, modify, tag, and version-control prompts for your applications. Some key highlights from this release:
Versioning & Iteration: Seamlessly manage prompt versions in both Phoenix and your codebase.
New TypeScript Client: Sync prompts with your JavaScript runtime, now with native support for OpenAI, Anthropic, and the Vercel AI SDK.
New Python Client: Sync templates and apply them to AI SDKs like OpenAI, Anthropic, and more.
Standardized Prompt Handling: Native normalization for OpenAI, Anthropic, Azure OpenAI, and Google AI Studio.
Enhanced Metadata Propagation: Track prompt metadata on Playground spans and experiment metadata in dataset runs.
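For instance, a sketch of the pull-and-apply flow with the Python client (the prompt identifier and template variables are illustrative):

```python
from openai import OpenAI
from phoenix.client import Client

# Pull the latest version of a stored prompt (identifier is illustrative)
prompt = Client().prompts.get(prompt_identifier="article-summarizer")

# format() renders the template into provider-native invocation kwargs
resp = OpenAI().chat.completions.create(
    **prompt.format(variables={"topic": "sports"})
)
print(resp.choices[0].message.content)
```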
02.18.2025: One-Line Instrumentation⚡️
Available in Phoenix 8.0+
Phoenix has made it even simpler to get started with tracing by introducing one-line auto-instrumentation. By using register(auto_instrument=True), you can enable automatic instrumentation in your application, which will set up instrumentors based on your installed packages.
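The whole setup is a single call (assuming Phoenix is reachable at the default local endpoint or via PHOENIX_COLLECTOR_ENDPOINT):

```python
from phoenix.otel import register

# One line: configures an OTEL tracer provider pointed at Phoenix and
# enables every OpenInference instrumentor found among installed packages
register(auto_instrument=True)
```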
For more details, you can check the docs and explore further tracing options.
01.18.2025: Automatic & Manual Span Tracing ⚙️
Available in Phoenix 7.9+
In addition to using our automatic instrumentors and tracing directly using OTEL, we've now added our own layer to let you have the granularity of manual instrumentation without as much boilerplate code.
You can now access a tracer object with streamlined options to trace functions and code blocks. The main two options are:
Using the decorator @tracer.chain traces the entire function automatically as a Span in Phoenix. The input, output, and status attributes are set based on the function's parameters and return value.
Using the tracer in a with clause allows you to trace specific code blocks within a function. You manually define the Span name, input, output, and status. Both options are shown in the sketch below.
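A minimal sketch of both options (the project name and function bodies are illustrative; the chain decorator and openinference_span_kind argument follow the documented tracer wrapper):

```python
from phoenix.otel import register

tracer = register(project_name="my-app").get_tracer(__name__)

# Option 1: the decorator captures input, output, and status automatically
@tracer.chain
def summarize(text: str) -> str:
    return text[:100]

# Option 2: a with clause traces one block; name, input, and output are manual
def pipeline(text: str) -> str:
    with tracer.start_as_current_span(
        "preprocess", openinference_span_kind="chain"
    ) as span:
        cleaned = text.strip()
        span.set_input(text)
        span.set_output(cleaned)
    return summarize(cleaned)
```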
12.09.2024: Sessions 💬
Available in Phoenix 7.0+
Sessions allow you to group multiple responses into a single thread. Each response is still captured as a single trace, but each trace is linked together and presented in a combined view.
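One way to link traces into a session is the OpenInference context helper (a sketch; assumes the openinference-instrumentation package and an already-instrumented app):

```python
import uuid

from openinference.instrumentation import using_session

session_id = str(uuid.uuid4())

# Spans created inside this block share one session ID, so Phoenix
# presents their traces together as a single thread
with using_session(session_id=session_id):
    ...  # call your instrumented LLM app here
```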
11.18.2024: Prompt Playground 🛝
Available in Phoenix 6.0+
Prompt Playground is now available in the Phoenix platform! This new release allows you to test the effects of different prompts, tools, and structured output formats to see which performs best.
Replay individual spans with modified prompts, or run full Datasets through your variations.
Easily test different models, prompts, tools, and output formats side-by-side, directly in the platform.
10.01.2024: Improvements & Bug Fixes 🐛
We've made several performance enhancements, added new features, and fixed key issues to improve stability, usability, and efficiency across Phoenix.
Numerous stability improvements to our hosted Phoenix instances accessed at app.phoenix.arize.com
Added a new command to easily launch Phoenix from the CLI: phoenix serve
Implemented a simple email sender to simplify dependencies
Improved error handling for imported spans
Replaced hdbscan with fast-hdbscan
Added PHOENIX_CSRF_TRUSTED_ORIGINS environment variable to set trusted origins
Added support for Mistral 1.0
Fixed an issue that caused px.Client().get_spans_dataframe() requests to time out
09.26.2024: Authentication & RBAC 🔐
Available in Phoenix 5.0+
We've added Authentication and Role-Based Access Control to Phoenix. This was a long-requested feature set, and we're excited for the new uses of Phoenix this will unlock!
The auth feature set includes:
Secure Access: All of Phoenix’s UI & APIs (REST, GraphQL, gRPC) now require access tokens or API keys. Keep your data safe!
RBAC (Role-Based Access Control): Admins can manage users; members can update their profiles—simple & secure.
API Keys: Now available for seamless, secure data ingestion & querying.
OAuth2 Support: Easily integrate with Google, AWS Cognito, or Auth0.
✉ Password Resets via SMTP to make security a breeze.
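On the client side, authenticating is a matter of presenting a key (a sketch; assumes the server is running with auth enabled and that PHOENIX_API_KEY holds a key minted in the UI):

```python
import os

# Present an API key with every request to an auth-enabled Phoenix server
os.environ["PHOENIX_API_KEY"] = "<your-api-key>"

import phoenix as px

client = px.Client()  # requests now carry the key from the env var above
```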
07.18.2024: Guardrails AI Integrations💂
Available in Phoenix 4.11.0+
Our integration with Guardrails AI allows you to capture traces on guard usage and create datasets based on these traces. This integration is designed to enhance the safety and reliability of your LLM applications, ensuring they adhere to predefined rules and guidelines.
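A sketch of wiring up the instrumentor (assumes the openinference-instrumentation-guardrails package and the current phoenix.otel registration API; the project name is illustrative):

```python
from openinference.instrumentation.guardrails import GuardrailsInstrumentor
from phoenix.otel import register

# Send traces of guard usage to Phoenix (project name is illustrative)
tracer_provider = register(project_name="guardrails-demo")
GuardrailsInstrumentor().instrument(tracer_provider=tracer_provider)
```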
07.11.2024: Hosted Phoenix 💻
Phoenix is now available for deployment as a fully hosted service.
With hosted instances, your data is stored between sessions, and you can easily share your work with team members.
We are partnering with LlamaIndex to power a new observability platform in LlamaCloud: LlamaTrace. LlamaTrace will automatically capture traces emitted from your LlamaIndex applications, and store them in a persistent, cloud-accessible Phoenix instance.
07.03.2024: Datasets & Experiments 🧪
Available in Phoenix 4.6+
Datasets: Datasets are a new core feature in Phoenix that live alongside your projects. They can be imported, exported, created, curated, manipulated, and viewed within the platform, and should make a few flows much easier:
Fine-tuning: You can now create a dataset based on conditions in the UI, or by manually choosing examples, then export these into CSV or JSONL formats ready-made for fine-tuning APIs.
Experimentation: External datasets can be uploaded into Phoenix to serve as the test cases for experiments run in the platform.
Experiments: Our new Datasets and Experiments feature enables you to create and manage datasets for rigorous testing and evaluation of your models. You can now run comprehensive experiments to measure and analyze the performance of your LLMs in various scenarios.
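A compact sketch of the loop (the dataset columns, toy task, and evaluator are illustrative; upload_dataset and run_experiment follow the datasets and experiments APIs):

```python
import pandas as pd
import phoenix as px
from phoenix.experiments import run_experiment

# Upload a tiny dataset (column names are illustrative)
df = pd.DataFrame(
    {"question": ["What is Phoenix?"], "answer": ["An LLM observability tool"]}
)
dataset = px.Client().upload_dataset(
    dataset_name="qa-examples",
    dataframe=df,
    input_keys=["question"],
    output_keys=["answer"],
)

# A task maps each example to an output; evaluators then score that output
def task(example):
    return example.input["question"].upper()  # stand-in for a real LLM call

run_experiment(dataset, task, evaluators=[lambda output: len(output) > 0])
```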
07.02.2024: Function Call Evaluations ⚒️
Available in Phoenix 4.6+
We are introducing a new built-in function call evaluator that scores the function/tool-calling capabilities of your LLMs. This off-the-shelf evaluator will help you ensure that your models are not just generating text but also effectively interacting with tools and functions as intended.
This evaluator checks for issues arising from function routing, parameter extraction, and function generation.
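A sketch of running the evaluator with llm_classify (the dataframe column names below are assumptions and must match the template's variables; the model choice is illustrative):

```python
import pandas as pd

from phoenix.evals import (
    TOOL_CALLING_PROMPT_RAILS_MAP,
    TOOL_CALLING_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Column names are assumptions; they must match the template's variables
df = pd.DataFrame([{
    "question": "What is the weather in SF?",
    "tool_call": '{"name": "get_weather", "arguments": {"city": "SF"}}',
    "tool_definitions": '[{"name": "get_weather", "parameters": {"city": "string"}}]',
}])

results = llm_classify(
    dataframe=df,
    template=TOOL_CALLING_PROMPT_TEMPLATE,
    model=OpenAIModel(model="gpt-4o"),
    rails=list(TOOL_CALLING_PROMPT_RAILS_MAP.values()),
)
```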