02.14.2025
New Releases, Enhancements, + Changes
Users can now schedule when monitors run. Monitors can be configured to run:
Hourly & Daily: Select specific days of the week.
Daily, Weekly & Monthly: Runs at 12 AM UTC after creation.
Default Behavior: Monitors will continue running every 3 hours, 7 days a week unless configured otherwise.
Users can now export only the columns they care about for large datasets, reducing SDK export time by up to 95%.
Specify which columns of data you'd like to export when exporting data via the ArizeExportClient. When using the export_model_to_df function, users can specify the columns parameter to export only specific columns.
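For example, here is a minimal sketch of a column-filtered export. The space ID, model ID, time range, and column names are placeholders, and the exact parameters may vary by SDK version:

```python
from datetime import datetime, timedelta

from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

# Placeholder credentials and identifiers for illustration only.
client = ArizeExportClient(api_key="YOUR_API_KEY")

# Pass `columns` so only the listed columns are exported,
# rather than the full dataset.
df = client.export_model_to_df(
    space_id="YOUR_SPACE_ID",
    model_id="YOUR_MODEL_ID",
    environment=Environments.PRODUCTION,
    start_time=datetime.now() - timedelta(days=7),
    end_time=datetime.now(),
    columns=["prediction_id", "prediction_label", "actual_label"],
)
```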
Users can now upload CSVs as a dataset in Arize. Columns in the file will be attributes that users can access in Experiments or in Prompt Playground. Learn more →
We’ve made some updates to make monitors more organized, searchable, and user-friendly. Here’s what’s new:
Cardless Design – A sleek, modern table view for better readability.
Project-Level Monitors – LLM and ML monitors now have separate tabs.
Search & Sort – Find monitors by name or dimension, plus sort by any column.
Summary Stats – See how many monitors triggered in the last 24 hours.
New LLM Monitor Types – Clearer categories:
Custom Metric Monitor → Performance Monitor with a custom metric preselected.
Span Property Monitor → Data Quality Monitor for span properties.
Evaluation Monitor → Data Quality Monitor for evaluations.
Quick Monitor for Errors – Easily enable error count monitoring (count, status_code = ERROR).
We've added support for HTTP protocol when sending traces to Arize through an OTEL tracer.
To use: specify /v1/traces as the endpoint and Transport.HTTP as the transport in our register helper.
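A minimal sketch of what this looks like with the register helper is below. The space ID, API key, project name, and endpoint host are placeholders, and parameter names may differ slightly depending on your arize-otel version:

```python
from arize.otel import register, Transport

# Set up an OTEL tracer provider that exports spans to Arize over HTTP
# instead of gRPC, using the /v1/traces endpoint described above.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",      # placeholder
    api_key="YOUR_API_KEY",        # placeholder
    project_name="my-llm-app",     # placeholder
    endpoint="https://otlp.arize.com/v1/traces",  # illustrative host; note the /v1/traces path
    transport=Transport.HTTP,
)

# Use the provider as usual to create spans.
tracer = tracer_provider.get_tracer(__name__)
with tracer.start_as_current_span("example-span"):
    pass  # your instrumented application code runs here
```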
The latest video tutorials, paper readings, ebooks, self-guided learning modules, and technical posts:
💯 How 100X AI Uses Phoenix to Supercharge AI-Driven Troubleshooting
⚙️ Multiagent Finetuning: A Conversation with Researcher Yilun Du