New Releases, Enhancements, Changes + Arize in the News!


Model Schema API

Query a model's schema through our public-facing GraphQL API. This allows users to programmatically paginate through a model's features, tags, predictions, and actuals, unlocking the ability to dynamically create monitors that track these values over time.

An example of this functionality can be found in the Create Data Quality Monitor Colab. Learn more about the Model Schema API here, and more about our GraphQL API here.
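The linked Colab shows the real queries; as a rough sketch of the pattern, here is Relay-style cursor pagination (the `edges`/`pageInfo` connection shape common to GraphQL APIs) with a stubbed fetch function. The field names and fake data below are illustrative assumptions, not Arize's actual schema:

```python
def paginate(fetch_page, page_size=10):
    """Yield every node from a cursor-paged GraphQL connection."""
    cursor = None
    while True:
        page = fetch_page(first=page_size, after=cursor)
        for edge in page["edges"]:
            yield edge["node"]
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            break
        cursor = info["endCursor"]

# Fake pages standing in for an HTTP call to a GraphQL endpoint;
# in practice each call would send a query with (first, after) variables.
def fake_fetch(first, after):
    pages = {
        None: {"edges": [{"node": "age"}, {"node": "state"}],
               "pageInfo": {"hasNextPage": True, "endCursor": "c1"}},
        "c1": {"edges": [{"node": "purchase_amount"}],
               "pageInfo": {"hasNextPage": False, "endCursor": "c2"}},
    }
    return pages[after]

features = list(paginate(fake_fetch, page_size=2))
print(features)  # ['age', 'state', 'purchase_amount']
```

Because the generator walks pages until `hasNextPage` is false, the same loop works whether a model has ten features or ten thousand.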

Quick Start Guide

The quick start guide gives you a brief intro to core Arize workflows. Relaunch it at any time by clicking the rocket ship icon at the bottom left of any page to check out the new short guides, which walk through key workflows for bias tracing, embeddings troubleshooting, performance tracing, and more.

Performance Monitor Metric: MASE

Use MASE as your evaluation metric for forecasting models to better understand model performance. Because MASE is scale-independent, it is recommended for comparing the accuracy of forecasts across series.

Learn how to calculate MASE here.
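In the standard formulation, MASE is the forecast's mean absolute error divided by the mean absolute error of a one-step naive forecast on the same series, so values below 1 beat the naive baseline. A minimal sketch (computing the naive baseline in-sample over the evaluation series, one common simplification):

```python
def mase(actuals, forecasts):
    """Mean Absolute Scaled Error: forecast MAE scaled by the MAE of a
    one-step naive forecast (predicting each value from the previous one)."""
    n = len(actuals)
    mae = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / n
    naive_mae = sum(abs(actuals[i] - actuals[i - 1]) for i in range(1, n)) / (n - 1)
    return mae / naive_mae

actuals = [10, 12, 14, 13, 15]
forecasts = [11, 12, 13, 14, 15]
print(round(mase(actuals, forecasts), 3))  # 0.343 — better than the naive baseline
```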

Drift Monitor Metric: KL Divergence

Choose between PSI and KL divergence when measuring drift. KL divergence is the better choice when your distribution has high variance.

Learn more about KL divergence here.
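For discrete (binned) distributions, KL divergence is KL(P || Q) = Σ pᵢ · log(pᵢ / qᵢ). A minimal sketch comparing a baseline distribution to a current one (note the metric is asymmetric, and in practice continuous features are binned before comparison):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions given as aligned bin probabilities.
    Zero-probability bins in P contribute nothing; Q must be nonzero where P is."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

baseline = [0.5, 0.3, 0.2]   # reference (e.g. training) distribution
current  = [0.4, 0.4, 0.2]   # production distribution
print(round(kl_divergence(baseline, current), 4))  # 0.0253
```

A value of 0 means the distributions are identical; the score grows as the production distribution drifts away from the baseline.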

Data Quality Monitor Metrics: Percentiles

Evaluate P50, P95, and P99 for data quality monitors to gain a more representative understanding of both your median and outlier data performance.

Learn more about data quality monitors here.
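The value of tracking P95/P99 alongside P50 is that tail percentiles move long before the median does. A quick illustration using the nearest-rank convention (one of several percentile conventions; Arize's exact interpolation method may differ):

```python
import math

def percentile(data, p):
    """Nearest-rank percentile: smallest value with at least p% of the data
    at or below it."""
    s = sorted(data)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(k, 0)]

values = list(range(1, 101))          # a clean 1..100 feature distribution
print(percentile(values, 50))         # 50
print(percentile(values, 95))         # 95
print(percentile(values, 99))         # 99

# A handful of extreme outliers barely moves the median but jumps P99:
values += [10_000] * 5
print(percentile(values, 50), percentile(values, 99))  # 53 10000
```

Monitoring all three together distinguishes a shift in typical data (P50 moves) from an emerging outlier problem (only P95/P99 move).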

In The News

Arize + Hugging Face = Better Performance, Lower Costs for Unstructured Models

Arize AI and Hugging Face are partnering to help organizations train unstructured models and monitor and troubleshoot those models in production, lowering costs and maximizing performance.

Learn more about challenges with NLP models, follow along with a code example on obtaining embeddings from a transformer model, and see how Arize and Hugging Face can help improve your unstructured data workflows in this informative post.

Case Study: ShareChat's Machine Learning Team Grows Engagement, Inclusivity

ShareChat is a social media giant with over 400 million monthly active users and over 200 models in production spanning an array of use cases, from click-through rate prediction to NLP. Since deploying Arize, ShareChat’s monetization AI team reports benefits that include:

  • Hundreds of extra hours freed up per year across the team

  • A payback period of under a year; >100% ROI

  • Improved model performance from proactively surfacing feature drift and performance impact score at a cohort-level

  • Robust drift monitoring for structured data, with plans to implement embedding drift monitoring for NLP models

  • Immediate visibility when issues arise based on predefined and automated thresholds, maximizing internal visibility and speeding up mean time-to-resolution

Interview: Cerebral's Michael Stefferson

Michael Stefferson, Staff Machine Learning Engineer at Cerebral, discusses his career and the unique challenges of deploying effective models in telemental health in this interview with Aber Roberts, Machine Learning Engineer at Arize.
