05.09.2022

New Releases, Enhancements, Changes + Arize in the News!

What's New

Automatic Thresholds for Monitors

The Arize platform now automatically populates monitoring thresholds for both Drift Monitors and Performance Monitors. A monitor's threshold is the value compared against your model's current calculated metric value; when the metric crosses above or below that threshold, the monitor triggers an alert.

Automatic thresholds help ML teams scale their monitoring, reduce time to resolution, and increase overall workflow efficiency.
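To make the mechanic above concrete, a threshold check boils down to a simple comparison. The sketch below is purely illustrative (the Monitor class, field names, and values are hypothetical, not Arize's API):

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    metric_name: str        # e.g. a drift statistic or a performance metric
    threshold: float        # the value the current metric is compared against
    alert_when_above: bool  # True: alert when the metric exceeds the threshold

def should_alert(monitor: Monitor, current_value: float) -> bool:
    """Trigger an alert when the current metric value crosses the monitor's threshold."""
    if monitor.alert_when_above:
        return current_value > monitor.threshold
    return current_value < monitor.threshold

# Example: a performance monitor that alerts when accuracy drops below 0.90
accuracy_monitor = Monitor(metric_name="accuracy", threshold=0.90, alert_when_above=False)
print(should_alert(accuracy_monitor, current_value=0.84))  # True -> alert fires
```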

Drift Monitors

Arize sets automatic drift thresholds for both prediction drift and feature drift. An automatic threshold is set once there is sufficient production data to establish a trend.

Learn more about automatic baselines here, drift monitors here, and how automatic thresholds for drift monitors are calculated here.
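For intuition, drift monitors compare a production distribution against a baseline using a distribution-distance statistic such as Population Stability Index (PSI). The sketch below is a generic PSI computation, not Arize's exact implementation; the bin count and smoothing constant are assumptions:

```python
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10, eps: float = 1e-6) -> float:
    """Population Stability Index between a baseline sample and a production sample.

    Higher values indicate more drift; 0 means the binned distributions match.
    """
    # Bin edges are derived from the baseline distribution
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clip production values into the baseline range so every value lands in a bin
    production = np.clip(production, edges[0], edges[-1])
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Smooth empty bins to avoid division by zero and log(0)
    base_pct = np.clip(base_pct, eps, None)
    prod_pct = np.clip(prod_pct, eps, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # e.g. a feature's baseline (training) values
production = rng.normal(0.3, 1.2, 10_000)  # the same feature, shifted in production
print(round(psi(baseline, production), 3))
```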

Performance Monitors

Arize sets an automatic threshold for performance monitors when there is sufficient production data to determine a trend. This capability intelligently alerts ML teams when the performance metric of your choosing is not behaving as expected.

Learn more about automatic baselines here, performance tracing here, and how automatic thresholds for performance monitors are calculated here.
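Arize's actual threshold calculation is described in the documentation linked above. For intuition only, one generic way to derive a threshold from a metric's recent trend is a trailing mean plus a multiple of the standard deviation; the window and sensitivity values below are assumptions:

```python
import numpy as np

def auto_threshold(metric_history: np.ndarray, sensitivity: float = 2.0,
                   higher_is_worse: bool = True) -> float:
    """Derive an alerting threshold from a trailing window of metric values.

    Uses mean +/- sensitivity * std dev as a generic stand-in for a trend-based threshold.
    """
    mean, std = metric_history.mean(), metric_history.std()
    return mean + sensitivity * std if higher_is_worse else mean - sensitivity * std

# e.g. recent daily values of a drift statistic observed in production
history = np.array([0.08, 0.10, 0.09, 0.11, 0.10, 0.12, 0.09, 0.10, 0.11, 0.10])
print(round(auto_threshold(history, higher_is_worse=True), 3))  # alert if the metric rises above this
```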

Enhancements

Additional Object Store Support - Google Cloud Storage

Arize users can now automatically upload model inference data to the Arize platform via Google Cloud Storage. With this addition, the File Importer feature can ingest your data directly from GCS.

Learn more about our File Importer and supported Object Stores here.
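For example, staging a file in GCS with the official google-cloud-storage client might look like the sketch below. The bucket name and object path are placeholders; the location Arize reads from is whatever you configure for your File Importer job:

```python
from google.cloud import storage  # pip install google-cloud-storage

# Hypothetical bucket and path; use the values configured for your File Importer job.
BUCKET_NAME = "my-company-ml-data"
DESTINATION_BLOB = "arize/production-inferences/2022-09-05/inferences.parquet"

client = storage.Client()  # uses your default GCP credentials
bucket = client.bucket(BUCKET_NAME)
blob = bucket.blob(DESTINATION_BLOB)

# Upload a local file of model inference data for the File Importer to pick up
blob.upload_from_filename("inferences.parquet")
print(f"Uploaded to gs://{BUCKET_NAME}/{DESTINATION_BLOB}")
```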

New Performance Metric: Weighted Average Percentage Error (WAPE)

We've added a new performance metric, Weighted Average Percentage Error (WAPE) -- also known as the MAD/Mean ratio -- for more comprehensive performance tracing. WAPE is useful when your model is prone to outlier events because it is based on absolute error rather than squared error.

Learn how to calculate WAPE here.
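For reference, WAPE is the sum of absolute errors divided by the sum of absolute actuals. A minimal computation (with made-up sample values) looks like this:

```python
import numpy as np

def wape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Weighted Average Percentage Error: sum(|actual - predicted|) / sum(|actual|)."""
    return float(np.sum(np.abs(actual - predicted)) / np.sum(np.abs(actual)))

actual = np.array([120.0, 80.0, 0.0, 200.0])      # e.g. observed demand
predicted = np.array([100.0, 90.0, 10.0, 210.0])  # model forecasts
print(round(wape(actual, predicted), 3))  # 0.125
```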

In the News

Arize AI Launches Bias Tracing, a Tool for Uprooting Algorithmic Bias

It has become all too common to read about AI acting in discriminatory ways -- often with tragic consequences. That’s why we launched Bias Tracing, a tool designed to help teams monitor and take action on model fairness metrics. Arize Bias Tracing enables teams to make multidimensional comparisons, uncovering the features and cohorts contributing to algorithmic bias in production without time-consuming SQL querying or painful troubleshooting workflows. Learn more here.

How To Know When It’s Time To Leave Your Big Tech Software Engineering Job

In “How To Know When It’s Time To Leave Your Big Tech Software Engineering Job,” Arize AI founding engineer and Forbes 30 Under 30 honoree Tsion Behailu shares why she bet on Arize (and herself) after nearly five years at Google — and couldn’t be happier. Read more.

Building the Future of AI-Powered Retail Starts With Trust

“If customers don’t trust the model, it’s useless.” So says Jiazhen Zhu, Senior Data Engineer / Machine Learning Engineer and Tech Lead at Walmart Global Tech, who doesn’t pull any punches in this wide-ranging interview on MLOps, leadership, and the importance of ML monitoring and explainability. Read it here.

The Rise of AI Risk Disclosure

Three years ago, Alphabet and Microsoft made waves when they disclosed the use of AI as a potential risk factor in their annual financial reports. Given the rapid growth of AI in nearly every industry since then, it’s worth asking: how many companies followed their lead? This brief report from Arize AI outlines:

  • The growth in AI risk disclosure by industry

  • Examples of AI risk disclosures and responsible AI approaches from Fortune 500 companies

  • Recommendations on what executives should consider when assessing an AI risk management and disclosure strategy

Download the White Paper here.
