04.25.2022
New Releases, Enhancements, Changes + Arize in the News!
What's New
Analysis of Fairness Metrics
You can now analyze fairness metrics for your models in the Arize platform. Gain insight into whether your model is producing potentially biased or unfair outcomes for segments of interest using three fairness metrics: Recall Parity, False Positive Rate Parity, and Disparate Impact. Learn more about using this feature here.
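Concretely, each of these metrics compares a rate in a sensitive group against the same rate in a base group, with values near 1.0 indicating parity. The sketch below uses plain pandas and the standard textbook definitions; the column names and data are hypothetical, and this illustrates the metrics themselves rather than Arize's implementation.

```python
import pandas as pd

def group_rates(df: pd.DataFrame, mask: pd.Series) -> dict:
    """Compute recall, false positive rate, and positive prediction
    rate for the rows selected by mask (binary labels assumed)."""
    g = df[mask]
    tp = ((g["prediction"] == 1) & (g["actual"] == 1)).sum()
    fn = ((g["prediction"] == 0) & (g["actual"] == 1)).sum()
    fp = ((g["prediction"] == 1) & (g["actual"] == 0)).sum()
    tn = ((g["prediction"] == 0) & (g["actual"] == 0)).sum()
    return {
        "recall": tp / (tp + fn),
        "fpr": fp / (fp + tn),
        "positive_rate": (g["prediction"] == 1).mean(),
    }

# Hypothetical data: binary predictions/actuals plus a sensitive attribute.
df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 1, 1, 0, 0],
    "actual":     [1, 0, 0, 1, 0, 1, 1, 0],
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
})

sensitive = group_rates(df, df["group"] == "a")
base = group_rates(df, df["group"] == "b")

# Each parity metric is the ratio of a rate in the sensitive group to the
# same rate in the base group. A common rule of thumb (the "four-fifths
# rule") flags ratios outside [0.8, 1.25] as potentially unfair.
print("Recall Parity:             ", sensitive["recall"] / base["recall"])
print("False Positive Rate Parity:", sensitive["fpr"] / base["fpr"])
print("Disparate Impact:          ", sensitive["positive_rate"] / base["positive_rate"])
```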
Enhancements
Tag Filters Now Available for Monitors
Ever wanted to drill down on your drift, performance, or data quality monitors using tag metadata? You can now tailor monitors to focus on the specific tags that matter most to your analysis. Check out the functionality under monitor creation and editing.
Dashboarding with Tags
You can now create any type of dashboard with tag metadata, or filter existing feature analysis and model performance dashboards by tags. This enhancement lets you build more granular dashboards for easier at-a-glance updates for different stakeholders.
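Both enhancements assume tag metadata was sent to Arize alongside your predictions. Below is a minimal sketch of attaching tag columns with the Arize Python SDK's pandas logger; the keys, model ID, and column names are placeholders, and exact parameter names and enum values can vary across SDK versions.

```python
import pandas as pd
from arize.pandas.logger import Client
from arize.utils.types import Environments, ModelTypes, Schema

# Placeholder credentials.
client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")

# Example production records with tag metadata attached to each row.
df = pd.DataFrame({
    "prediction_id": ["p1", "p2"],
    "prediction": ["approved", "denied"],
    "actual": ["approved", "approved"],
    "store_region": ["us-west", "us-east"],  # tag
    "app_version": ["2.3.1", "2.3.0"],       # tag
})

# tag_column_names marks which columns are tags, making them
# available for monitor filters and dashboard filters later.
schema = Schema(
    prediction_id_column_name="prediction_id",
    prediction_label_column_name="prediction",
    actual_label_column_name="actual",
    tag_column_names=["store_region", "app_version"],
)

response = client.log(
    dataframe=df,
    model_id="credit-approval-model",   # placeholder model ID
    model_version="v1",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
    schema=schema,
)
```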
In the News
Arize AI Announces SOC 2 Type II Certification
Arize AI is officially SOC 2 Type II certified! As our Chief Information Security Officer, Remi Cattiau, puts it: “Our SOC2 Certification is a validation of Arize AI’s security strategy, but it’s really just the beginning. Realizing Arize’s mission of making AI work and work for the people necessarily starts with putting security and privacy at the heart of everything we do.” Read more about the certification and what it means in the press release.
Insights From the Front Lines of Building Feature Engineering Infrastructure
Arize’s latest “Rise of the ML Engineer” interview features Thomas Huang, a machine learning infrastructure software engineer at LinkedIn. In a wide-ranging Q&A, Huang offers career advice to graduating students and discusses Feathr, LinkedIn’s recently open-sourced feature store for productive ML, as well as lessons learned from trying to build active learning as a service. Read more.
The Next Evolutionary Step In Model Performance Management
Machine learning troubleshooting is painful and time-consuming today, but it doesn’t have to be. This paper charts the evolution that ML teams go through, from no monitoring to monitoring to full-stack ML observability, and offers a modernization blueprint for implementing ML performance tracing to solve problems faster. Download here.