New Releases, Enhancements, Changes + Arize in the News!
Business Impact through User-defined Functions
Want direct feedback on how your model's performance is impacting your business? With the new Business Impact tab, you can now define custom functions that express business KPIs in terms of raw confusion-matrix values. By describing the "business impact" of the model's output, you can quickly make business decisions (e.g., changing thresholds to maximize profit or mitigate loss) directly in the Arize platform.
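As a sketch of the idea, a user-defined impact function might weight each confusion-matrix cell by a dollar value. The function name and dollar figures below are illustrative assumptions, not Arize defaults:

```python
# Hypothetical business-impact function over raw confusion-matrix counts.
# The per-outcome dollar values are illustrative, not Arize defaults.

def business_impact(tp, fp, tn, fn,
                    profit_per_tp=100.0,   # revenue from a correct positive
                    cost_per_fp=20.0,      # cost of acting on a false alarm
                    cost_per_fn=80.0):     # opportunity cost of a miss
    """Net profit implied by a confusion matrix at a given threshold."""
    return tp * profit_per_tp - fp * cost_per_fp - fn * cost_per_fn

# Comparing two candidate thresholds by their resulting confusion matrices
# shows how a KPI like this drives threshold decisions:
loose = business_impact(tp=90, fp=40, tn=860, fn=10)   # lower threshold
strict = business_impact(tp=70, fp=5, tn=895, fn=30)   # higher threshold
```

With these example costs, the looser threshold yields the higher net profit, even though it produces more false positives.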
Another page added in this release is the Drift tab. Quickly access prediction drift information through this new tab, rather than going through a drift monitor. This view lets you easily see the current model's prediction drift over time, as well as a comparison between the current and baseline distributions.
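One common way to quantify the gap between a current and a baseline prediction distribution is the Population Stability Index (PSI); the announcement above does not name Arize's exact drift metric, so treat this as an illustrative sketch:

```python
import numpy as np

# Illustrative drift measure: Population Stability Index (PSI) between a
# baseline and a current sample of prediction scores. This is a common
# choice, not necessarily the metric the Drift tab uses.

def psi(baseline, current, bins=10, eps=1e-6):
    """PSI between two score samples, binned on the baseline's range."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    b = b / b.sum() + eps   # baseline bin proportions
    c = c / c.sum() + eps   # current bin proportions
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)   # reference prediction scores
stable = rng.beta(2, 5, 10_000)     # same distribution: PSI near 0
drifted = rng.beta(5, 2, 10_000)    # shifted distribution: large PSI
```

A common rule of thumb reads PSI below 0.1 as stable and above 0.25 as significant drift.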
AUC Now on Dashboards!
One of the most important performance metrics for classification models is here: Area Under the Curve. AUC provides an aggregate measure of performance across all possible classification thresholds; more can be read about it here. To use this new feature, add a timeseries widget, as you would for any other performance metric, and choose AUC as the "chart metric."
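The "all possible thresholds" framing has an equivalent, dependency-free formulation: AUC is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A minimal sketch (toy labels and scores, not platform code):

```python
# Minimal sketch of AUC via the rank (Mann-Whitney) formulation:
# the probability that a random positive outscores a random negative,
# counting ties as half a win.

def auc_score(y_true, y_score):
    """Area under the ROC curve for binary labels and real-valued scores."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three of the four positive/negative pairs are ranked correctly:
auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```

Note that no threshold appears anywhere in the computation, which is exactly why AUC summarizes performance across all of them.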
Click into a Feature, Prediction, or Actual to See Data Quality Metrics
We now enable users to do a deep dive into a model's features, predictions, and actuals. Users can see valuable information such as percent empty over time, quantiles, and cardinality, allowing the platform to surface critical issues that indicate the health of a model. Additionally, this view is tied into the model health page, enabling earlier notification of any data quality issues.
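The named metrics are straightforward to reproduce locally for a single column; this sketch assumes pandas and an illustrative `credit_score` feature, neither of which comes from the announcement:

```python
import pandas as pd

# Hypothetical sketch of the data-quality metrics listed above, computed
# locally for a single feature column; the column name is illustrative.

df = pd.DataFrame({"credit_score": [680, 720, None, 550, 720, None, 810]})
col = df["credit_score"]

percent_empty = col.isna().mean() * 100   # share of missing values, in %
median = col.quantile(0.5)                # one point of the quantile summary
cardinality = col.nunique()               # count of distinct non-null values
```

High percent-empty or a sudden cardinality change in a feature like this is the kind of issue the tab is designed to surface.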
Beyond Monitoring: The Rise of Observability
One question that comes up a lot in the ML space is: "I already monitor my data. Why do I need observability, too?" We wrote an article in collaboration with Monte Carlo about how, when model performance issues arise, observability helps you get to the bottom of why they are occurring. To discover why an observability platform should be used for every model in production, read the post.
What is ML Observability?
ML observability is a crucial piece of the ML pipeline that, until recently, has been utilized only by the Googles and Facebooks of the world. We wrote an article explaining its importance, as well as how ML observability, achieved through an evaluation store, can help your team throughout the whole process of validating, monitoring, troubleshooting, and improving your models. Read the post.