Explainability

How to Interpret SHAP

By default, the model's Explainability tab shows the global feature importance values across all predictions within the specified time range. The global importance of a feature is measured by taking the mean of the absolute SHAP values for that feature across all samples in the evaluation window.
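
For reference, this aggregation is easy to reproduce offline once you have a matrix of SHAP values. The sketch below is a minimal illustration using made-up numbers and hypothetical feature names (merchant_ID, txn_amount, account_age); it is not part of the Arize API.

```python
import numpy as np

# One row per prediction in the evaluation window, one column per feature,
# as produced by any SHAP explainer (values here are made up for illustration).
shap_values = np.array([
    [ 0.12, -0.40,  0.05],
    [-0.08,  0.35,  0.02],
    [ 0.20, -0.15, -0.30],
])
feature_names = ["merchant_ID", "txn_amount", "account_age"]

# Global importance of a feature = mean of its absolute SHAP values across samples.
global_importance = np.abs(shap_values).mean(axis=0)

# Rank features from most to least important.
for name, score in sorted(zip(feature_names, global_importance), key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.3f}")
```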

SHAP values don't have a fixed scale the way probabilities (which range between 0 and 1) do. Instead, the scale of SHAP values is determined by the output scale of the model being interpreted.

Here are some key principles to help you interpret SHAP values:

  • Magnitude: The magnitude of a SHAP value indicates the importance of a feature for a particular prediction. Larger magnitudes imply a greater influence on the model's output.

  • Output Scale Dependency: The scale of SHAP values is linked to the model's output.

    • For a binary classification model that outputs probabilities, SHAP values will typically fall in the range [-1, 1].

    • For a regression model, the SHAP values will be on the scale of the target variable. For example, if you're predicting house prices ranging from $100k to $1M, the SHAP values will reflect that scale.

  • Summation to Model Output Difference: The SHAP values for an individual prediction sum to the difference between the model's prediction for that sample and the mean prediction over the entire dataset (the base value). This property ensures that the SHAP values provide a fair allocation of the model's prediction among the features, as demonstrated in the sketch below.
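
The additivity property is straightforward to verify with the open-source shap package. The sketch below uses a small scikit-learn regressor purely as a stand-in for your model; the dataset and model choices are illustrative assumptions, not anything specific to Arize.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in model and data; substitute your own model and evaluation window.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)
base_value = explainer.expected_value    # mean model output over the background data

# Additivity: base value + sum of a row's SHAP values equals the model's prediction for that row.
i = 0
reconstructed = base_value + shap_values[i].sum()
print(reconstructed, model.predict(X.iloc[[i]])[0])  # the two numbers should match up to float error
```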

Analyzing Feature Importance Values

Global Feature Importance

The global explainability plot, often referred to as the SHAP summary plot, provides a comprehensive overview of the feature importance in your model. Here's a step-by-step guide to interpreting your global feature importance:

Feature Importance Ranking

The y-axis lists all the features, ranked by their importance in the model, from top to bottom.

The topmost feature has the highest average absolute SHAP value, making it the most influential in your model's predictions. In the plot below, merchant_ID is the most important feature.

Magnitude of SHAP Values

The x-axis represents the SHAP values. Features that push the model's output higher have positive SHAP values; features that push it lower have negative values.

The further a feature's SHAP value is from zero, the more impact it has on the model's output. For instance, large positive SHAP values for merchant_ID indicate a strong positive impact on the model's predictions.
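
If you want a comparable summary (beeswarm) view outside the Arize UI, the open-source shap package can produce one. The sketch below uses a stand-in scikit-learn model and dataset purely for illustration.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in model and data; substitute your own model and evaluation window.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)

# Features are ranked top-to-bottom by mean |SHAP|; each point is one prediction,
# and its position on the x-axis is that prediction's SHAP value for the feature.
shap.summary_plot(shap_values, X)
```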

Cohort Feature Importance

The dropdown filters at the top of the page allow you to understand the importance of your model's features across a cohort, or subset, of your predictions. The Cohort Feature Importance plot offers a refined perspective on feature importance by comparing the importance within a specific cohort to the global metrics. Here's a step-by-step guide to interpreting your cohort feature importance (a minimal comparison sketch follows the list):

  1. Recognizing the Cohort and Global Metrics:

    • Cohort Explainability (Blue Line): Represents the SHAP values for features within a specific data cohort.

    • Global Explainability (Yellow Mark): Depicts the global SHAP value for each feature, providing a benchmark for comparison.

  2. Comparative Analysis:

    • Features with Consistent Importance: If the blue line and the yellow mark for a feature are aligned or very close, this suggests that the feature's importance is consistent both within the cohort and globally.

    • Features with Divergent Importance: A significant difference between the blue line and yellow mark indicates that the feature has a different impact within the cohort compared to the global dataset. Such discrepancies might suggest potential biases, anomalies, or unique patterns within that cohort.

  3. Actionable Insights:

    • Tailored Feature Engineering: Features with divergent importance might require further investigation or adjusting the model to capture the cohort-specific dynamics effectively.

    • Model Adjustments: If certain features consistently show divergent importance across multiple cohorts, it may be worth re-evaluating the model or considering more granular models tailored to each cohort.
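
Outside the UI, a rough equivalent of this comparison is to aggregate SHAP values over a boolean cohort mask and compare the result with the global aggregation. The sketch below uses random numbers and hypothetical feature names purely for illustration.

```python
import numpy as np

# (n_samples, n_features) SHAP values for the evaluation window;
# random here, with hypothetical feature names.
rng = np.random.default_rng(0)
shap_values = rng.normal(size=(1_000, 3))
feature_names = ["merchant_ID", "txn_amount", "account_age"]

# Boolean mask selecting the cohort of interest (e.g. one merchant segment).
cohort_mask = rng.random(1_000) < 0.2

global_importance = np.abs(shap_values).mean(axis=0)
cohort_importance = np.abs(shap_values[cohort_mask]).mean(axis=0)

# Large deltas flag features whose influence inside the cohort diverges from the global picture.
for name, g, c in zip(feature_names, global_importance, cohort_importance):
    print(f"{name}: global={g:.3f}  cohort={c:.3f}  delta={c - g:+.3f}")
```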

Compare Feature Importance

Compare two production datasets to visualize changes in feature importance across datasets and model versions.

Local Feature Importance

If you need per-prediction explainability, i.e. the ability to get an explanation for a single prediction based on a prediction ID lookup, please reach out to your Arize support team for examples of enabling per-prediction visibility in your account.
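
For reference, per-prediction (local) explanations can also be generated offline with the open-source shap package. The sketch below explains a single row with a waterfall plot; the dataset and model are stand-ins, and this is independent of Arize's prediction ID lookup.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in model and data; in practice, the row would be the one behind a prediction ID.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[[42]])  # explain one prediction

# Waterfall plot: per-feature contributions from the base value to this single prediction.
shap.plots.waterfall(explanation[0])
```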

Using Explainability to Troubleshoot Drift

On the model's Drift tab, sort feature drift by Prediction Drift Impact and Feature Importance.

Using Explainability to Troubleshoot Performance

On the model's Performance tab, sort the performance breakdown by Feature Importance.
