09.27.2021
New Releases, Enhancements, Changes + Arize in the News!
What's New
Project Performance Tab
Find easy access to an overview of Accuracy and MAPE on the Project Page! Based on model type, numeric models show a MAPE value, while all other models show Accuracy. If you haven't logged actuals yet, a handy tooltip will prompt you to do so.
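For reference, MAPE (mean absolute percentage error) averages |actual − predicted| / |actual| across records and expresses it as a percentage. A minimal standalone NumPy sketch, illustrative only and not part of the Arize SDK:

```python
import numpy as np

def mape(actuals, predictions) -> float:
    """Mean absolute percentage error, expressed as a percentage.

    Assumes no actual value is zero (division by zero otherwise).
    """
    actuals = np.asarray(actuals, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    return float(np.mean(np.abs((actuals - predictions) / actuals)) * 100)

# Example: actuals vs. predictions for a numeric model
print(mape([100, 200, 300], [110, 190, 330]))  # ~8.33
```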
Log Loss Monitors
A new performance monitor metric! Log Loss is an important classification metric for understanding how well your model's predicted probabilities match actual outcomes. You can now track Log Loss in Performance Monitors and in the Drift Tab.
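As a quick refresher, binary log loss is -(1/N) Σ [yᵢ log(pᵢ) + (1 − yᵢ) log(1 − pᵢ)]: confident, correct predictions score near zero, while confident mistakes are penalized heavily. A minimal NumPy sketch, illustrative only:

```python
import numpy as np

def log_loss(y_true, y_prob, eps: float = 1e-15) -> float:
    """Binary log loss (cross-entropy): lower is better.

    y_true: ground-truth labels in {0, 1}
    y_prob: predicted probability of the positive class
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob)))

# Confident, correct predictions score near 0
print(log_loss([1, 0, 1], [0.9, 0.1, 0.8]))  # ~0.14
```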
New metric: PR AUC
Create monitors to track Precision-Recall Area Under the Curve (PR AUC) on your scored models. Find PR AUC in Performance Monitors, Statistic Widgets, and Timeseries Widgets.
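To sanity check the metric offline, PR AUC can be computed from a precision-recall curve with scikit-learn. A minimal sketch with toy labels and scores:

```python
from sklearn.metrics import auc, precision_recall_curve

y_true = [0, 0, 1, 1, 1]              # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.9]  # model prediction scores

precision, recall, _ = precision_recall_curve(y_true, y_score)
print(f"PR AUC: {auc(recall, precision):.3f}")  # area under the PR curve
```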
Enhancements
Bulk Deleting & Muting Monitors
You know that feeling of wanting to get rid of everything all at once? Yeah, us too. Now you can do just that with our new Bulk Deleting (and Bulk Muting) feature! Found in the Model Overview Monitors tab, these bulk actions make managing your auto-monitored models that much easier.
Filtered Baselines
A new and improved version of model baseline configuration! Dynamic baseline timeframes let you monitor model drift at a higher cadence and in a more maintainable way.
Scored Categorical Models
If you have a model that outputs both a prediction label and a prediction score, you can now send records to Arize with an actual score alongside an actual label. This lets you visualize and monitor both classification and regression performance metrics on the same model, with no more need to create two separate classification and regression models in Arize!
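As a sketch of what such records might look like (the column names here are illustrative placeholders; you map your own columns when logging):

```python
import pandas as pd

# Hypothetical scored categorical records: each row carries a label
# (for classification metrics) and a score (for regression metrics).
df = pd.DataFrame({
    "prediction_id":    ["a1", "a2", "a3"],
    "prediction_label": ["fraud", "not_fraud", "fraud"],
    "prediction_score": [0.92, 0.11, 0.67],
    "actual_label":     ["fraud", "not_fraud", "not_fraud"],
    "actual_score":     [1.0, 0.0, 0.0],
})
```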
Python SDK v3.0.0
A new Python SDK submodule intended to simplify and improve the efficiency of pandas dataframe uploads. Dataframes are bulk serialized and uploaded as a single file, reducing memory overhead and making it possible to upload millions of datapoints per minute, an over 50x improvement over previous versions.
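A minimal sketch of a bulk upload with the pandas submodule. The import paths, Client parameters, and Schema fields below follow the documented pandas logger interface but may differ slightly across SDK versions, and all credentials and identifiers are placeholders:

```python
import pandas as pd
from arize.pandas.logger import Client
from arize.utils.types import Environments, ModelTypes, Schema

# Placeholder credentials and identifiers
client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")

df = pd.DataFrame({
    "prediction_id":    ["a1", "a2"],
    "prediction_label": ["fraud", "not_fraud"],
    "prediction_score": [0.92, 0.11],
    "actual_label":     ["fraud", "not_fraud"],
    "actual_score":     [1.0, 0.0],
})

# Map dataframe columns to Arize record fields
schema = Schema(
    prediction_id_column_name="prediction_id",
    prediction_label_column_name="prediction_label",
    prediction_score_column_name="prediction_score",
    actual_label_column_name="actual_label",
    actual_score_column_name="actual_score",
)

# The entire dataframe is serialized and uploaded as a single file
response = client.log(
    dataframe=df,
    model_id="fraud-detection",
    model_version="1.0",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
    schema=schema,
)
```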
In the News
A Look into Global, Cohort, and Local Model Explainability
As AI/ML revolutionizes industries and changes how we work and play, model explainability takes on an elevated importance. Yet introspecting and understanding why a model made a particular prediction has become increasingly difficult as models have grown more complex. Learn about global, cohort, and local model explainability and how to use explainability in your ML lifecycle! Read the Article
Move Fast Without Breaking Things in ML
In collaboration with Bob Nugman, ML Engineer at DoorDash, we explore the importance of implementing reliability engineering for ML initiatives. This piece outlines three pillars of reliability: observability, management of change, and incident response, which together create a systematic reliability program. Learn how to discover the root cause of production issues, come up with a solution, and enable ML reliability in your production systems. Read the Article
ML Observability 101: How To Make Your Models Work IRL - Webinar
ML Observability helps you eliminate the guesswork in production and deliver continuous model improvements. Learn how to:
Use statistical distance checks to monitor features and model inputs in production (see the PSI sketch after this list)
Analyze performance regressions such as drift and how they impact business metrics
Use troubleshooting techniques to determine whether issues are model or data related - Watch the webinar
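As an example of the kind of statistical distance check mentioned above, here is a minimal population stability index (PSI) sketch in NumPy. This standalone function is illustrative only; Arize supports multiple distance measures out of the box:

```python
import numpy as np

def psi(baseline, production, bins: int = 10) -> float:
    """Population stability index between a baseline and a production sample.

    Higher means more drift; a common rule of thumb flags PSI > 0.2.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p_frac = np.histogram(production, bins=edges)[0] / len(production)
    # Clip empty bins to avoid log(0) and division by zero
    b_frac = np.clip(b_frac, 1e-6, None)
    p_frac = np.clip(p_frac, 1e-6, None)
    return float(np.sum((p_frac - b_frac) * np.log(p_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
production = rng.normal(0.5, 1.0, 10_000)  # distribution shifted by half a sigma
print(psi(baseline, production))  # ≈ 0.25: clear drift, above the 0.2 rule of thumb
```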
The Rise of the ML Engineer: Alex Zamoshchin
As a continuation of our "Rise of the ML Engineer" series, we recently chatted with Alex Zamoshchin from Lyft to understand how the role of the ML engineer is evolving to meet the needs of Lyft's critical ML initiatives. Learn more about how ML engineers fit across an ML team at Lyft and their role in putting models into production. Read the Article