Release Notes
The latest from the Phoenix team.
Available in Phoenix 8.30+
The Phoenix client now includes the SpanQuery DSL for more advanced span querying. Additionally, a get_spans_dataframe method has been added to facilitate easier data extraction for span-related information.
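As a quick illustration, here is a minimal sketch using the phoenix.Client interface; exact import paths and method availability may vary with your client version:

```python
import phoenix as px
from phoenix.trace.dsl import SpanQuery

client = px.Client()

# Pull spans into a pandas DataFrame; an optional filter string narrows the result.
spans_df = client.get_spans_dataframe("span_kind == 'LLM'")

# The SpanQuery DSL supports more targeted extraction, e.g. selecting specific attributes.
query = SpanQuery().where("span_kind == 'LLM'").select(
    input="input.value",
    output="output.value",
)
llm_io_df = client.query_spans(query)
```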
Available in Phoenix 8.29+
Phoenix now supports Transport Layer Security (TLS) for both HTTP and gRPC connections, enabling encrypted communication and optional mutual TLS (mTLS) authentication. This enhancement provides a more secure foundation for production deployments.
Available in Phoenix 8.28+
When stopping the Phoenix server via Ctrl+C, the shutdown process now exits cleanly with code 0 to reflect intentional termination. Previously, this would trigger a traceback with KeyboardInterrupt, misleadingly indicating a failure.
Available in Phoenix 8.27+
Improved trace navigation by automatically scrolling the selected span into view when a user navigates to a specific trace. This enhances usability by making it easier to locate and focus on the relevant span without manual scrolling.
Available in Phoenix 8.26+
We've released openinference-instrumentation-mcp, a new package in the OpenInference OSS library that enables seamless OpenTelemetry context propagation across MCP clients and servers. It automatically creates spans, injects and extracts context, and connects the full trace across services to give you complete visibility into your MCP-based AI systems.
Big thanks to Adrian Cole and Anuraag Agrawal for their contributions to this feature.
Available in Phoenix 8.26+
Phoenix now supports programmatic API key creation through a new endpoint, making it easier to automate project setup and trace logging. To enable this, set the PHOENIX_ADMIN_SECRET environment variable in your deployment.
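For example, an automation script might create a key along these lines. The route and payload shown are illustrative assumptions only; check the API reference served by your deployment for the exact endpoint and schema:

```python
import os

import httpx

# Assumes PHOENIX_ADMIN_SECRET is set on the server and available in this environment.
admin_secret = os.environ["PHOENIX_ADMIN_SECRET"]

# Hypothetical route and payload for illustration; consult your deployment's API docs.
response = httpx.post(
    "http://localhost:6006/v1/system-api-keys",
    headers={"Authorization": f"Bearer {admin_secret}"},
    json={"name": "ci-trace-logging"},
)
response.raise_for_status()
print(response.json())
```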
Available in Phoenix 8.25+
Available in Phoenix 8.24+
This update enhances the Project Management API with more flexible project identification. We've added support for identifying projects by both ID and hex-encoded name, and introduced a new _get_project_by_identifier helper function.
Available in Phoenix 8.23+
Available in Phoenix 8.22+
Available in Phoenix 8.21+
The new span aside moves the Span Annotation editor into a dedicated panel, providing a clearer view for adding annotations and enhancing customization of your setup. Read this documentation to learn how annotations can be used.
Available in Phoenix 8.20+
Newly added to the OpenAI Agents SDK is support for MCP Span Info, allowing for the tracing and extraction of useful information about MCP tool listings. Use the Phoenix OpenAI Agents SDK integration for powerful agent tracing.
Available in Phoenix 8.20+
Available in Phoenix 8.19+
Available in Phoenix 8.17+
You can now preconfigure admin users at startup using an environment variable, making it easier to manage access during deployment. Admins defined this way are automatically seeded into the database and ready to log in.
Available in Phoenix 8.16+
You can now delete experiments directly from the action menu, making it quicker to manage and clean up your workspace.
Available in Phoenix 8.15+
Available in Phoenix 8.14+
We've added the ability to resize Span, Trace, and Session tables. Resizing preferences are now persisted in the tracing store, ensuring settings are maintained per-project and per-table.
Available in Phoenix 8.13+
Available in Phoenix 8.11+
You can now save and load configurations directly from prompts or default model settings. Additionally, you can adjust the budget token value and enable/disable the "thinking" feature, giving you more control over model behavior and resource allocation.
Available in Phoenix 8.9+
Prompt Playground now supports new GPT and Anthropic models with enhanced configuration options. Instrumentation options have been improved for better traceability, and evaluation capabilities have expanded to cover Audio & Multi-Modal Evaluations. Phoenix also introduces new integration support for LiteLLM Proxy & Cleanlabs evals.
Available in Phoenix 8.8+
We've rolled out several enhancements to Projects, offering more flexibility and control over your data. Key updates include persistent column selection, advanced filtering options for metadata and spans, custom time ranges, and improved performance for tracing views. These changes streamline workflows, making data navigation and debugging more efficient.
Available in Phoenix 8.0+
Phoenix prompt management now lets you create, modify, tag, and version control prompts for your applications. Some key highlights from this release, with a short usage sketch after the list:
Versioning & Iteration: Seamlessly manage prompt versions in both Phoenix and your codebase.
New TypeScript Client: Sync prompts with your JavaScript runtime, now with native support for OpenAI, Anthropic, and the Vercel AI SDK.
New Python Client: Sync templates and apply them to AI SDKs like OpenAI, Anthropic, and more.
Standardized Prompt Handling: Native normalization for OpenAI, Anthropic, Azure OpenAI, and Google AI Studio.
Enhanced Metadata Propagation: Track prompt metadata on Playground spans and experiment metadata in dataset runs.
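A minimal pull-and-format sketch with the Python client; the prompt name and template variable are placeholders, and the exact client API may differ slightly by version:

```python
from openai import OpenAI
from phoenix.client import Client

# Fetch the latest version of a prompt stored in Phoenix ("article-summarizer" is a placeholder).
prompt = Client().prompts.get(prompt_identifier="article-summarizer")

# Format the template and pass the resulting kwargs straight to the OpenAI SDK.
kwargs = prompt.format(variables={"article": "Phoenix 8.0 adds prompt management."})
response = OpenAI().chat.completions.create(**kwargs)
print(response.choices[0].message.content)
```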
Available in Phoenix 8.0+
Phoenix has made it even simpler to get started with tracing by introducing one-line auto-instrumentation. By using register(auto_instrument=True), you can enable automatic instrumentation in your application, which will set up instrumentors based on your installed packages.
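For example (the project name is a placeholder):

```python
from phoenix.otel import register

# auto_instrument=True detects installed OpenInference instrumentors
# (OpenAI, LangChain, LlamaIndex, ...) and enables them automatically.
tracer_provider = register(
    project_name="my-app",  # placeholder project name
    auto_instrument=True,
)
```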
Available in Phoenix 7.9+
In addition to using our automatic instrumentors and tracing directly using OTEL, we've now added our own layer to let you have the granularity of manual instrumentation without as much boilerplate code.
You can now access a tracer object with streamlined options to trace functions and code blocks. The two main options are using the @tracer.chain decorator and using the tracer in a with clause.
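A short sketch of both options, assuming the OpenInference tracer helpers exposed by the registered tracer; function and span names are placeholders:

```python
from phoenix.otel import register

tracer_provider = register(project_name="my-app")  # placeholder project name
tracer = tracer_provider.get_tracer(__name__)

# Option 1: decorate a function; its inputs, outputs, and exceptions are recorded on a chain span.
@tracer.chain
def summarize(text: str) -> str:
    return text[:100]

# Option 2: wrap an arbitrary block of code in a span.
with tracer.start_as_current_span("post-process", openinference_span_kind="chain") as span:
    result = summarize("Phoenix traces without boilerplate.")
    span.set_output(result)
```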
Available in Phoenix 7.0+
Sessions allow you to group multiple responses into a single thread. Each response is still captured as a single trace, but each trace is linked together and presented in a combined view.
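One way to attach a session ID from Python is the using_session helper from openinference-instrumentation; this is a sketch, and the session ID here is arbitrary:

```python
import uuid

from openinference.instrumentation import using_session

session_id = str(uuid.uuid4())

# Spans created inside this context share the same session.id attribute,
# so Phoenix groups their traces into a single session thread.
with using_session(session_id=session_id):
    ...  # call your chat or agent code here
```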
Available in Phoenix 6.0+
Prompt Playground is now available in the Phoenix platform! This new release allows you to test the effects of different prompts, tools, and structured output formats to see which performs best.
Replay individual spans with modified prompts, or run full Datasets through your variations.
Easily test different models, prompts, tools, and output formats side-by-side, directly in the platform.
Available in Phoenix 5.0+
We've added Authentication and Role-based Access Controls to Phoenix. This was a long-requested feature set, and we're excited for the new uses of Phoenix this will unlock!
Available in Phoenix 4.11.0+
Our integration with Guardrails AI allows you to capture traces on guard usage and create datasets based on these traces. This integration is designed to enhance the safety and reliability of your LLM applications, ensuring they adhere to predefined rules and guidelines.
Phoenix is now available for deployment as a fully hosted service.
We are partnering with LlamaIndex to power a new observability platform in LlamaCloud: LlamaTrace. LlamaTrace will automatically capture traces emitted from your LlamaIndex application.
Available in Phoenix 4.6+
Datasets: Datasets are a new core feature in Phoenix that live alongside your projects. They can be imported, exported, created, curated, manipulated, and viewed within the platform, and make fine-tuning and experimentation easier.
Available in Phoenix 4.6+
Tool call and result IDs are now shown in the span details view. Each ID is placed within a collapsible header and can be easily copied. This update also supports spans with multiple tool calls. Get started with tracing your tool calls.
This release introduces a REST API for managing projects, complete with full CRUD functionality and access control. Key features include CRUD Operations and Role-Based Access Control. Check out our docs to test these features.
We've added support for Prompt Tagging in the Phoenix client. This new feature gives you more control and visibility over your prompts throughout the development lifecycle. Tag prompts directly in code, label prompt versions, and add tag descriptions. Check out the documentation on prompt tagging.
You can now toggle the option to treat orphan spans as root when viewing your spans. Additionally, we've enhanced the UI with an icon view in span details for better visibility in smaller displays. Learn more in the docs.
Within each project, there is now a Config tab to enhance customization. The default tab can now be set per project, ensuring the preferred view is displayed. Learn more in the docs.
In the New Project tab, we've added quick setup to instrument your application for BeeAI, SmolAgents, and the OpenAI Agents SDK. Easily configure these integrations with streamlined instructions. Check out all Phoenix integrations here.
We've introduced an integration for the OpenAI Agents SDK for Python, which provides enhanced visibility into agent behavior and performance. For more details on a quick setup, check out our docs.
Check out the docs for more.
Check out the docs for more on prompts!
Check out the docs for more on how to use tracer objects.
Sessions make it easier to visualize multi-turn exchanges with your chatbot or agent. Sessions launch with Python and TS/JS support. For more on sessions, check out the docs.
Automatically capture traces as Experiment runs for later debugging. See the docs for more information on Prompt Playground, or jump into the platform to try it out for yourself.
The auth feature set includes secure access, RBAC, API keys, and OAuth2 support. For all the details on authentication, view our docs.
Check out the docs here.
In addition to our existing notebook, CLI, and self-hosted deployment options, we're excited to announce that Phoenix is now available as a fully hosted service. With hosted instances, your data is stored between sessions, and you can easily share your work with team members.
Hosted Phoenix is 100% free to use!
For more details on using datasets, see our docs.
Experiments: Our new Datasets and Experiments feature enables you to create and manage datasets for rigorous testing and evaluation of your models. Check out our docs for full details.
We are introducing a new built-in function call evaluator that scores the function/tool-calling capabilities of your LLMs. This off-the-shelf evaluator will help you ensure that your models are not just generating text but also effectively interacting with tools and functions as intended. Check out the docs for an example.