Arize Changelog
See the latest new features and bug fixes from the Arize team:
Access and manage your prompts in code with support for OpenAI and VertexAI.
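As a rough sketch of what this could look like in Python: the client name, import path, and methods below (`ArizePromptClient`, `pull`, `format`) are illustrative assumptions, not the confirmed SDK surface, so check the Arize docs for the real API.

```python
# Illustrative sketch only: the import path, client, and methods here
# are assumed names, not a confirmed API surface.
from openai import OpenAI

from arize_prompts import ArizePromptClient  # hypothetical import path

prompts = ArizePromptClient(space_id="YOUR_SPACE_ID", api_key="YOUR_API_KEY")
prompt = prompts.pull("ticket-triage")  # fetch a versioned prompt by name

# Render the stored template with runtime variables, then call OpenAI.
messages = prompt.format(ticket="My order never arrived.")
response = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```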
Get full visibility into your evaluation task runs, including when each ran, what triggered it, and whether there were errors.
Easily run your online evaluation tasks over historical data.
Dynamically select the fields you want to see in your sessions view.
You can now collapse rows to see more data at a glance or expand them to view more text.
Schedule monitors to run hourly, daily, weekly, or monthly.
Support for sending traces to Arize over HTTP! See GitHub for more info.
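For example, with the plain OpenTelemetry SDK you can point an OTLP/HTTP exporter at Arize. The endpoint URL and header names below are assumptions sketching the shape of the setup; verify them against the linked GitHub docs.

```python
# Sketch: exporting spans to Arize over OTLP/HTTP instead of gRPC.
# The endpoint URL and auth header names are assumptions; verify
# against the Arize docs before use.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://otlp.arize.com/v1/traces",  # assumed HTTP endpoint
    headers={"space_id": "YOUR_SPACE_ID", "api_key": "YOUR_API_KEY"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```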
We’ve added new ways to plot your charts, with custom colors and better UX!
See exactly how and when your monitors are triggered.
Support for sessions via LangChain's native thread tracking in TypeScript is now available. Easily track multi-turn conversations and threads using LangChain.js.
Extract key insights quickly from your spans instead of trying to decipher meaning in hundreds of spans. Ask questions and run evals right in the trace view.
Building dashboard plots just got way easier. Create time series plots and even translate code into ready-to-go visualizations.
The Custom Metric skill now supports a conversational flow, making it easier for users to iterate on and refine metrics dynamically.
Experiment traces for a dataset are now consolidated under "Experiment Projects".
For your multi-class ML models, you can see how your model is calibrated in a single visualization.
Users can now view a detailed breakdown of labels for their experiments on the Experiments Details page.
We've added full support for all available OpenAI models in the playground, including o1-mini and o1-preview.
We've added better input variable behavior, autocompletion enhancements, support for mustache/f-string input variables, and more.
We now store the last three filters used by a user! Users can easily access their filter history in the query filters dropdown, making it simpler to reuse filters for future queries.
Apply filters directly from the table by hovering over the text to reveal the filter icon.
Easily add spans to a dataset from the Traces page using the "Add to Dataset" button.
Quickly debug and refine your prompts used by your online evaluators by loading them prefilled into prompt playground.
Use Arize to annotate your data with third parties.
Specify which columns of data you'd like to export by passing the columns argument when exporting data via the export client.
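A minimal sketch with the Python export client; the column names in the list are assumptions about your schema, and the columns argument itself is the new addition:

```python
# Sketch: export only selected columns with the Arize export client.
from datetime import datetime, timedelta

from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

client = ArizeExportClient(api_key="YOUR_API_KEY")
df = client.export_model_to_df(
    space_id="YOUR_SPACE_ID",
    model_id="my-llm-app",
    environment=Environments.TRACING,
    start_time=datetime.now() - timedelta(days=7),
    end_time=datetime.now(),
    # Only these columns are returned; names depend on your schema.
    columns=["context.span_id", "attributes.input.value", "attributes.output.value"],
)
```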
You can now create datasets through many methods, from traces, code, manually in the UI, or CSV upload.
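For the code path, a minimal sketch with the datasets client, assuming the `arize.experimental.datasets` module; the constructor argument and constants may differ slightly, so check the docs:

```python
# Sketch: create a dataset from a pandas DataFrame.
import pandas as pd

from arize.experimental.datasets import ArizeDatasetsClient
from arize.experimental.datasets.utils.constants import GENERATIVE

client = ArizeDatasetsClient(api_key="YOUR_API_KEY")  # auth arg name may vary
df = pd.DataFrame(
    {
        "input": ["What is Arize?", "How do I trace my app?"],
        "expected_output": ["An observability platform.", "Use arize-otel."],
    }
)
dataset_id = client.create_dataset(
    space_id="YOUR_SPACE_ID",
    dataset_name="qa-examples",
    dataset_type=GENERATIVE,
    data=df,
)
```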
Capture, process, and send audio data to Arize and observe your application's behavior.
Assess how well your models identify emotional tones like frustration, joy, or neutrality.
Manage, iterate, and deploy your prompts in one place. Version control your templates and use them across playground, tasks, and APIs.
Use code-based evaluators to evaluate spans without requiring requests to an LLM-as-a-Judge. These include Regex matching, JSON validation, Contains keyword, and more!
Quickly experiment with your prompts across your datasets. All you have to do is click "Save as experiment".
You can now log experiment data manually using a dataframe, instead of running an experiment. This is useful if you already have the data you need, and re-running the query would be expensive.
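A hedged sketch of what this could look like; the `log_experiment` method name, its parameters, and the column conventions below are assumptions for illustration, not a confirmed signature:

```python
# Sketch: log precomputed experiment results from a DataFrame.
# `log_experiment` and its parameter names are assumed for illustration.
import pandas as pd

from arize.experimental.datasets import ArizeDatasetsClient

client = ArizeDatasetsClient(api_key="YOUR_API_KEY")
results = pd.DataFrame(
    {
        "example_id": ["ex-1", "ex-2"],
        "result": ["Paris", "Berlin"],   # task outputs you already have
        "correctness.score": [1, 0],     # precomputed eval scores
    }
)
client.log_experiment(
    space_id="YOUR_SPACE_ID",
    experiment_name="baseline-v1",
    experiment_df=results,
    dataset_name="qa-examples",
)
```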
Users can generate their desired metric by having Copilot translate natural language descriptions or existing code (e.g., SQL, Python) into AQL.
Copilot now works for embeddings! Users can select an embedding data point, and Copilot will analyze it for patterns and insights.
Local Explainability is now live, providing both a table view and a waterfall-style plot for detailed, per-feature SHAP values on individual predictions.
Visualize specific evaluations over time in dashboards.
Now users can follow the full function calling flow from OpenAI and iterate on different functions in different messages from within the Prompt Playground.
Users can now ingest traces created by the Vercel AI SDK into Arize.
You can add metadata and context that will be picked up by all of our auto instrumentations and added to spans.
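For example, with the OpenInference `using_attributes` context manager, any spans created inside the block pick up the supplied context; `run_my_llm_pipeline` below is a placeholder for your own code.

```python
# Attach session/user metadata to every span created in this block.
from openinference.instrumentation import using_attributes

with using_attributes(
    session_id="session-123",
    user_id="user-456",
    metadata={"plan": "enterprise"},
    tags=["beta", "checkout-flow"],
):
    # Any instrumented calls here (e.g., an OpenAI request) emit spans
    # carrying the session ID, user ID, metadata, and tags above.
    run_my_llm_pipeline()  # placeholder for your application code
```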
Users now have the option to test a task, such as an online eval, by running it once on existing data, or to apply evaluation labels to older traces.
Users can now filter experiments based on dataset attributes or experiment results, making it easy to identify areas for improvement and track their experiment progress with more precision.
With Embeddings Tracing, you can effortlessly select embedding spans and dive straight into the UMAP visualizer, simplifying troubleshooting for your genAI applications.
We made it way simpler to add automatic tracing to your applications! It's now just a few lines of code to trace your LLM application with OpenTelemetry, using our arize-otel package.
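For instance, using the arize-otel helper together with an OpenInference instrumentor (shown here for OpenAI):

```python
# Set up tracing to Arize and auto-instrument OpenAI calls.
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="my-llm-app",
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# From here on, OpenAI SDK calls are traced and exported to Arize.
```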