Bias Tracing (Fairness)

Navigate to the Fairness tab on your model to identify model bias issues across sensitive attributes such as race and sex, and catch them before they broadly impact marginalized groups.

Bias Tracing Overview

Arize Bias Tracing helps teams analyze and take action on fairness metrics. The solution enables teams to make multidimensional comparisons, automatically surfacing the features and cohorts most likely contributing to algorithmic bias.

Troubleshooting Model Bias

To evaluate how your model behaves on a protected attribute, select a sensitive group (e.g. Asian), a base group (e.g. all other values – African American, LatinX, Caucasian, etc.), and a fairness metric. You can then assess whether the model is biased against the protected group using the four-fifths (⅘) rule. The four-fifths rule is a threshold used by regulatory agencies such as the United States Equal Employment Opportunity Commission to help identify adverse treatment of protected classes. Using the four-fifths rule, teams can check whether their model falls outside the 0.8-1.25 threshold, which indicates algorithmic bias may be present in their model.

To evaluate model bias, navigate to the Fairness tab and select inputs to evaluate on.

Setup

Metrics

  • Recall Parity: measures how "sensitive" the model is for one group compared to another, or the model’s ability to predict true positives correctly.

  • False Positive Rate Parity: measures whether a model incorrectly predicts the positive class for the sensitive group as compared to the base group.

  • Disparate Impact: a quantitative measure of adverse treatment of protected classes, typically calculated as the ratio of positive outcome rates between the sensitive and base groups.

Attribute: any of your model's protected categorical features (e.g. income class, race, sex)

Base Group: the unprotected group (e.g. Caucasian), used as the reference when measuring parity against the sensitive group

Sensitive Group: the protected group (e.g. Black) that you are evaluating for the presence of algorithmic bias

You can select an individual value or multiple values for each of the base and sensitive groups, depending on your team's goals.
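For intuition, the sketch below shows one way these metrics can be computed from a prediction log. It is a simplified illustration, not Arize's implementation, and the race, prediction, and actual column names are assumptions.

```python
import pandas as pd

# Toy prediction log; the column names are illustrative assumptions,
# not a required schema.
df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "prediction": [1, 0, 1, 1, 1, 0],
    "actual":     [1, 0, 0, 1, 0, 0],
})

def recall(group: pd.DataFrame) -> float:
    """True positives / all actual positives."""
    positives = group[group["actual"] == 1]
    return (positives["prediction"] == 1).mean()

def false_positive_rate(group: pd.DataFrame) -> float:
    """False positives / all actual negatives."""
    negatives = group[group["actual"] == 0]
    return (negatives["prediction"] == 1).mean()

def positive_rate(group: pd.DataFrame) -> float:
    """Fraction of rows predicted as the positive class."""
    return (group["prediction"] == 1).mean()

sensitive = df[df["race"] == "black"]   # sensitive (protected) group
base      = df[df["race"] == "white"]   # base (unprotected) group

# Each parity metric is the sensitive group's rate divided by the base group's rate.
recall_parity    = recall(sensitive) / recall(base)
fpr_parity       = false_positive_rate(sensitive) / false_positive_rate(base)
disparate_impact = positive_rate(sensitive) / positive_rate(base)

print(recall_parity, fpr_parity, disparate_impact)
```

A parity value of 1.0 means the sensitive and base groups are treated identically on that metric; values far from 1.0 indicate a gap between the groups.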

Fairness Over Time

Once you have selected your metric, attribute, base group, and sensitive group, you will be presented with a visualization of your model's fairness metric over time, along with a stacked histogram displaying your model's overall traffic, traffic for the base group, and traffic for the sensitive group.

The traffic histogram helps you understand whether your fairness metric is impacted by a lack of adequate representation of the sensitive class (or overrepresentation of the base class) relative to the total traffic.

To zoom in to a particular time range where bias may have been higher or lower, drag/highlight your cursor over a section of the Fairness over Time chart.
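As a rough sketch of what the chart shows, the fairness metric can be recomputed per time bucket over your prediction log. The daily bucketing and column names below are illustrative assumptions, not Arize's implementation.

```python
import pandas as pd

# Hypothetical prediction log with timestamps; column names are assumptions.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-05-01", "2023-05-01", "2023-05-02",
         "2023-05-02", "2023-05-03", "2023-05-03"]),
    "race":       ["black", "white", "black", "white", "black", "white"],
    "prediction": [1, 1, 0, 1, 1, 1],
})

def disparate_impact(window: pd.DataFrame) -> float:
    """Positive-prediction rate of the sensitive group over the base group."""
    sensitive_rate = (window.loc[window["race"] == "black", "prediction"] == 1).mean()
    base_rate      = (window.loc[window["race"] == "white", "prediction"] == 1).mean()
    return sensitive_rate / base_rate

# One fairness value per day, mirroring the Fairness over Time chart,
# plus the per-day traffic counts behind the stacked histogram.
daily_fairness = df.groupby(pd.Grouper(key="timestamp", freq="D")).apply(disparate_impact)
daily_traffic  = df.groupby([pd.Grouper(key="timestamp", freq="D"), "race"]).size()
print(daily_fairness)
print(daily_traffic)
```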

4/5ths Rule

To interpret the fairness metric value for the time period you are evaluating, many companies use the four-fifths rule.

The 4/5ths rule is a threshold used by regulatory agencies like the United States Equal Employment Opportunity Commission to help identify adverse treatment of protected classes.

Using the four-fifths rule, you can check whether your model's fairness metric falls outside the 0.8-1.25 range; if it does, algorithmic bias against the selected sensitive group may be present in your model.
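A minimal check of this threshold might look like the sketch below; the function is hypothetical and simply encodes the 0.8-1.25 bounds described above.

```python
def violates_four_fifths(fairness_value: float,
                         lower: float = 0.8, upper: float = 1.25) -> bool:
    """Flag possible algorithmic bias when a parity-style fairness metric
    falls outside the 0.8-1.25 band implied by the four-fifths rule."""
    return not (lower <= fairness_value <= upper)

print(violates_four_fifths(0.72))  # True  -> the sensitive group may be disadvantaged
print(violates_four_fifths(1.10))  # False -> within the acceptable band
```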

Fairness Breakdown

Scrolling below the Fairness over Time chart, you will see the Fairness Breakdown by Features or Tags.

By clicking the caret next to a listed feature, you can dig deeper to see which segment within each feature or tag is contributing to your model's bias.

Each bar represents the fairness metric for the base and sensitive groups, calculated only on the subsection of your dataset where that segment is true.

The darker red a segment is, the more bias is present. By scrolling through, you can quickly uncover the problematic segments where bias is more pronounced and take action.
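Conceptually, each bar comes from recomputing the fairness metric on one segment at a time, along the lines of the sketch below. The "state" feature and the other column names are hypothetical, and disparate impact stands in for whichever metric you selected.

```python
import pandas as pd

# Hypothetical dataset; "state" stands in for any feature or tag on the model.
df = pd.DataFrame({
    "race":       ["black", "white", "black", "white", "black", "white"],
    "state":      ["CA", "CA", "CA", "NY", "NY", "NY"],
    "prediction": [1, 1, 0, 1, 0, 1],
})

def disparate_impact(frame: pd.DataFrame) -> float:
    """Positive-prediction rate of the sensitive group over the base group."""
    sensitive_rate = (frame.loc[frame["race"] == "black", "prediction"] == 1).mean()
    base_rate      = (frame.loc[frame["race"] == "white", "prediction"] == 1).mean()
    return sensitive_rate / base_rate

# Recompute the metric on each segment of the "state" feature,
# mirroring one bar per segment in the Fairness Breakdown.
for segment, subset in df.groupby("state"):
    print(segment, disparate_impact(subset))
```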

Compare Bias Across Datasets

To compare fairness and bias across model versions or environments (e.g. training vs. production) and evaluate which model performs better for certain groups, click "Add Comparison" and add an additional dataset. This can help answer questions such as "Were we seeing this bias in training?" or "Does my latest model version exhibit more or less bias than the last?"

Add Filters

You can layer additional filters across features, tags, prediction scores/values, and actual scores/values to evaluate your model's bias on a more granular level and drill down into where the issue may be stemming from.

To quickly add a problematic segment as a filter for deeper troubleshooting, you can apply it directly from the Fairness Breakdown.
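As a minimal sketch of what layering filters amounts to, you can filter the rows first and then recompute the fairness metric on what remains; the state and pred_score columns below are hypothetical.

```python
import pandas as pd

# Hypothetical log with a prediction score; all column names are assumptions.
df = pd.DataFrame({
    "race":       ["black", "white", "black", "white", "black", "white"],
    "state":      ["CA", "CA", "NY", "NY", "CA", "NY"],
    "pred_score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.95],
    "prediction": [1, 1, 0, 1, 1, 1],
})

# Layer a feature filter and a prediction-score filter, then recompute
# disparate impact (positive-rate ratio) on the remaining rows only.
subset = df[(df["state"] == "CA") & (df["pred_score"] >= 0.5)]
sensitive_rate = (subset.loc[subset["race"] == "black", "prediction"] == 1).mean()
base_rate      = (subset.loc[subset["race"] == "white", "prediction"] == 1).mean()
print(sensitive_rate / base_rate)
```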
