Interpreting & Analyzing Feature Importance Values
How to Interpret SHAP
By default, the model Explainability tab will show the global feature importance values across all predictions within the specified time range. The global importance of a feature is measured by taking the mean of the absolute SHAP values for that feature across all samples in the evaluation window.
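As a reference for how that aggregation works, here is a minimal sketch in Python, assuming you already have a matrix of per-prediction SHAP values (`shap_values`, shaped samples × features) and a matching `feature_names` list; both names are placeholders, not platform APIs:

```python
import numpy as np

def global_importance(shap_values, feature_names):
    """Rank features by mean absolute SHAP value across all samples."""
    mean_abs = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature column
    ranked = sorted(zip(feature_names, mean_abs), key=lambda kv: kv[1], reverse=True)
    return ranked  # [(feature, importance), ...] most influential first
```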
SHAP values don't have a fixed scale like probabilities that range between 0 and 1. Instead, the scale of SHAP values is determined by the output scale of the model being interpreted.
Here are some key principles to help you interpret SHAP values:
Magnitude: The magnitude of a SHAP value indicates the importance of a feature for a particular prediction. Larger magnitudes imply a greater influence on the model's output.
Output Scale Dependency: The scale of SHAP values is linked to the model's output.
For a binary classification model that outputs probabilities, SHAP values will typically fall within [-1, 1], since the model's output is itself bounded between 0 and 1.
For a regression model, the SHAP values will be on the scale of the target variable. For example, if you're predicting house prices ranging from $100k to $1M, the SHAP values will reflect that scale.
Summation to Model Output Difference: The SHAP values for an individual prediction sum up to the difference between the model's prediction for that sample and the mean prediction over the entire dataset (often called the base value). This property ensures that the SHAP values provide a fair allocation of the model's prediction among the features.
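If you want to verify this additivity property yourself, the sketch below uses the open-source shap library; it assumes `model` is a fitted single-output model (for example, a regressor) and `X` is the evaluation data, and the exact output scale depends on the explainer type:

```python
import shap

explainer = shap.Explainer(model, X)   # X also serves as the background data
explanation = explainer(X)

i = 0  # any sample index
reconstructed = explanation.base_values[i] + explanation.values[i].sum()
# `reconstructed` matches the model's output for sample i (up to numerical
# tolerance): base value (mean prediction) plus the per-feature SHAP values.
```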
Analyzing Feature Importance Values
Global Feature Importance
The global explainability plot, often referred to as the SHAP summary plot, provides a comprehensive overview of the feature importance in your model. Here's a step-by-step guide to interpreting your global feature importance:
Feature Importance Ranking
The y-axis lists all the features, ranked by their importance in the model, from top to bottom.
The topmost feature has the highest average absolute SHAP value, making it the most influential in your model's predictions. In the plot below, merchant_ID is the most important feature.
Magnitude of SHAP Values
The x-axis represents the SHAP values. Features that push the model's output higher have positive SHAP values, while features that push it lower have negative values.
The further a feature's SHAP value is from zero, the more impact it has on the model's output. For instance, high SHAP values for merchant_ID indicate a strong positive impact on the model's predictions.
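To reproduce a comparable summary view outside the platform, the shap library's built-in plots produce the same ranking; this assumes the `explanation` object from the earlier sketch:

```python
import shap

# Beeswarm view: features ranked top-to-bottom by mean |SHAP|; the x-axis
# shows each individual prediction's SHAP value for that feature.
shap.plots.beeswarm(explanation, max_display=10)

# Bar view of the same ranking (global mean |SHAP| per feature).
shap.plots.bar(explanation, max_display=10)
```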
Cohort Feature Importance
The dropdown filters at the top of the page allow you to understand the importance of your model's features across a cohort or subset of your predictions. The Cohort Feature Importance plot offers a refined perspective into feature importance, comparing the importance within a specific cohort to global metrics. Here’s a step-by-step guide on interpreting your cohort feature importance:
Recognizing the Cohort and Global Metrics:
Cohort Explainability (Blue Line): Represents the SHAP values for features within a specific data cohort.
Global Explainability (Yellow Mark): Depicts the global SHAP value for each feature, providing a benchmark for comparison.
Comparative Analysis:
Features with Consistent Importance: If the blue line and the yellow mark for a feature are aligned or very close, this suggests that the feature's importance is consistent both within the cohort and globally.
Features with Divergent Importance: A significant difference between the blue line and yellow mark indicates that the feature has a different impact within the cohort compared to the global dataset. Such discrepancies might suggest potential biases, anomalies, or unique patterns within that cohort.
Actionable Insights:
Tailored Feature Engineering: Features with divergent importance might require further investigation or cohort-specific feature engineering to capture that cohort's dynamics effectively.
Model Adjustments: If certain features consistently show divergent importance across multiple cohorts, it may be worth re-evaluating the model or considering more granular models tailored to each cohort.
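A similar cohort-versus-global comparison can be approximated offline. The sketch below is illustrative, not the platform's implementation; it assumes the `shap_values` matrix and `feature_names` from earlier, plus a hypothetical boolean `cohort_mask` selecting the cohort's rows (for example, predictions from a single region):

```python
import numpy as np
import pandas as pd

def cohort_vs_global(shap_values, feature_names, cohort_mask):
    """Compare mean |SHAP| within a cohort against the global values."""
    global_imp = np.abs(shap_values).mean(axis=0)
    cohort_imp = np.abs(shap_values[cohort_mask]).mean(axis=0)
    df = pd.DataFrame({"cohort": cohort_imp, "global": global_imp}, index=feature_names)
    df["divergence"] = df["cohort"] - df["global"]
    # Largest absolute divergence first: the features whose importance
    # differs most between the cohort and the global dataset.
    return df.reindex(df["divergence"].abs().sort_values(ascending=False).index)
```

Features at the top of the returned table are natural starting points for the investigation described above.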
Compare Feature Importance
Compare two production datasets to easily visualize a change in feature importance between different datasets and versions.
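Conceptually, this comparison reduces to diffing two global importance vectors; a minimal sketch, assuming `shap_values_a` and `shap_values_b` are SHAP matrices from the two datasets (or versions) with the same feature order:

```python
import numpy as np

def importance_shift(shap_values_a, shap_values_b, feature_names):
    """Rank features by how much their mean |SHAP| changed between datasets."""
    imp_a = np.abs(shap_values_a).mean(axis=0)
    imp_b = np.abs(shap_values_b).mean(axis=0)
    return sorted(zip(feature_names, imp_b - imp_a),
                  key=lambda kv: abs(kv[1]), reverse=True)
```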
Local Feature Importance
View the local explainability for any prediction by clicking into the row and viewing prediction details.
Our Local Explainability Plot breaks down how each feature contributes to a specific prediction. Each feature's SHAP value is shown as its own bar, measured from the model's base value (the average prediction across all instances). Only the top 10 contributors are displayed for clarity; any remaining features are combined under an "Other Contributions" bar.
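For reference, the shap library offers an equivalent single-prediction view; the sketch assumes the `explanation` object from the earlier snippets and a row index `i` for the prediction being inspected:

```python
import numpy as np
import shap

i = 0  # index of the prediction of interest

# Waterfall plot: starts at the base value and shows each feature's
# contribution, collapsing the remainder into a single "other" term.
shap.plots.waterfall(explanation[i], max_display=10)

# Manual version of the same breakdown: top-10 contributors plus the rest.
vals = explanation.values[i]
order = np.argsort(np.abs(vals))[::-1]
top_10 = [(explanation.feature_names[j], float(vals[j])) for j in order[:10]]
other_contributions = float(vals[order[10:]].sum())
```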
Using Explainability to Troubleshoot Drift
On the model's Drift tab, sort feature drift by Prediction Drift Impact and Feature Importance.
Using Explainability to Troubleshoot Performance
On the model's Performance tab, sort the performance breakdown by Feature Importance.