Model Explainability

The Arize platform can help you understand why your model produced its predictions.

Sending Feature Importance

Arize supports two methods for ingesting and visualizing feature importance.
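Feature importance values (such as SHAP values) are typically computed outside Arize and sent alongside each prediction. As a minimal, self-contained illustration of what a per-prediction attribution looks like: for a linear model, the exact SHAP value of feature i on a row is coef_i * (x_i - mean(x_i)), and the attributions sum to the prediction's deviation from the average prediction. The model weights and data below are hypothetical:

```python
import numpy as np

# Hypothetical linear model: prediction = w . x + b
w = np.array([2.0, -1.0, 0.5])
b = 0.1
X = np.array([
    [1.0, 0.0, 4.0],
    [3.0, 2.0, 0.0],
    [5.0, 1.0, 2.0],
])

# For a linear model, the exact SHAP value of feature i on a row x is
# w_i * (x_i - E[x_i]); one attribution per feature per prediction.
baseline = X.mean(axis=0)
shap_values = w * (X - baseline)

# Sanity check: attributions sum to prediction minus average prediction.
preds = X @ w + b
assert np.allclose(shap_values.sum(axis=1), preds - preds.mean())
```

In practice you would compute these values with an explainer library and log one attribution per feature with each prediction.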

Analyzing Feature Importance Values

Global Feature Importance

By default, the model's Explainability tab shows global feature importance values aggregated across all predictions within the selected time range.
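A common way to aggregate per-prediction attributions into a global ranking is the mean absolute SHAP value per feature. A minimal sketch with hypothetical values:

```python
import numpy as np

# Hypothetical per-prediction SHAP values (rows = predictions, cols = features)
feature_names = ["age", "income", "tenure"]
shap_values = np.array([
    [ 0.40, -0.10, 0.05],
    [-0.20,  0.30, 0.00],
    [ 0.60, -0.20, 0.10],
])

# Global importance: mean absolute attribution per feature across all predictions
global_importance = np.abs(shap_values).mean(axis=0)
ranking = sorted(zip(feature_names, global_importance),
                 key=lambda kv: kv[1], reverse=True)
for name, value in ranking:
    print(f"{name}: {value:.3f}")
```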

Cohort Feature Importance

The dropdown filters at the top of the page allow you to understand the importance of your model's features across a cohort or subset of your predictions.
Select a cohort of predictions using the model version, feature, and prediction label filters.
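A cohort view amounts to recomputing the same aggregate over a filtered subset of predictions. A sketch assuming hypothetical prediction records, each carrying a model version and per-feature SHAP values:

```python
# Hypothetical prediction records with a cohort field and per-feature SHAP values
records = [
    {"model_version": "v1", "shap": {"age": 0.5,   "income": -0.25}},
    {"model_version": "v2", "shap": {"age": -0.25, "income": 0.5}},
    {"model_version": "v2", "shap": {"age": 0.75,  "income": -0.25}},
]

def cohort_importance(records, **filters):
    """Mean absolute SHAP value per feature over the matching cohort."""
    cohort = [r for r in records
              if all(r.get(k) == v for k, v in filters.items())]
    features = cohort[0]["shap"].keys()
    return {f: sum(abs(r["shap"][f]) for r in cohort) / len(cohort)
            for f in features}

print(cohort_importance(records, model_version="v2"))
# → {'age': 0.5, 'income': 0.375}
```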

Compare Feature Importance

Compare two production datasets to easily visualize a change in feature importance between different datasets and versions.
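Comparing two datasets reduces to diffing their per-feature aggregates; a sketch with hypothetical importance values for a baseline and a current dataset:

```python
import numpy as np

feature_names = ["age", "income", "tenure"]
# Hypothetical mean |SHAP| per feature for two production datasets
baseline_importance = np.array([0.50, 0.25, 0.10])
current_importance  = np.array([0.20, 0.45, 0.12])

# A positive delta means the feature became more influential in the current dataset
delta = current_importance - baseline_importance
for name, d in sorted(zip(feature_names, delta),
                      key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {d:+.2f}")
```

Large shifts in a feature's relative importance between versions or time windows are a useful signal that the model's behavior has changed.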

Local Feature Importance

If you need per-prediction explainability (an explanation for a single prediction, retrieved by prediction ID lookup), reach out to your Arize support team for examples of enabling per-prediction visibility in your account.

Using Explainability to Troubleshoot Drift

On the model's Drift tab, sort feature drift by Prediction Drift Impact and Feature Importance.
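The intuition behind this sorting is to prioritize features that are both drifting and influential. As an illustrative sketch (not Arize's Prediction Drift Impact computation), one common drift statistic is the Population Stability Index (PSI), which can be combined with importance to rank features; the distributions and importance values below are hypothetical:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.
    Inputs are lists of bin proportions that each sum to 1."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Hypothetical binned feature distributions (baseline vs. current traffic)
features = {
    "age":    {"importance": 0.50,
               "baseline": [0.25, 0.50, 0.25], "current": [0.10, 0.40, 0.50]},
    "income": {"importance": 0.25,
               "baseline": [0.40, 0.40, 0.20], "current": [0.38, 0.42, 0.20]},
}

# Rank features that are both drifting and influential
ranked = sorted(features,
                key=lambda f: psi(features[f]["baseline"], features[f]["current"])
                              * features[f]["importance"],
                reverse=True)
print(ranked)  # drifting, high-importance features first
```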

Using Explainability to Troubleshoot Performance

On the model's Performance tab, sort the performance breakdown by Feature Importance.

Additional Resources

Questions? Email us at [email protected] or message us on Slack in the #arize-support channel.