The Arize platform can help you understand why your model produced its predictions.
Arize supports two methods for ingesting and visualizing feature importance values.
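Before feature importance values can be ingested, they have to be computed alongside your model's predictions. As a minimal, hedged sketch (this uses a hypothetical linear stand-in model and plain permutation importance, not an Arize API), the idea looks like this:

```python
import numpy as np

# Sketch of permutation feature importance: permute one feature at a time
# and measure how much model accuracy drops. The "model" here is a
# hypothetical linear scorer used purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_weights = np.array([2.0, 0.5, 0.0])  # feature 2 carries no signal
y = (X @ true_weights + rng.normal(scale=0.1, size=500) > 0).astype(int)

def predict(X):
    # Stand-in model: threshold on the known weights.
    return (X @ true_weights > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, predict(X))
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this feature's signal
    importances.append(baseline - accuracy(y, predict(X_perm)))

print([round(v, 3) for v in importances])
```

A larger accuracy drop means a more important feature; here feature 0 dominates and feature 2 contributes nothing. In practice these per-feature values (or SHAP values from a library such as `shap`) are what get logged with predictions.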
By default, the model Explainability tab will show the global feature importance values across all predictions within the specified time range.
The dropdown filters at the top of the page allow you to understand the importance of your model's features across a cohort or subset of your predictions.
Select a cohort of predictions using the model version, feature, and prediction label filters:
Compare two production datasets to visualize changes in feature importance across datasets and model versions.
If you need per-prediction explainability (the ability to retrieve an explanation for a single prediction via a prediction ID lookup), please reach out to your Arize support team for examples of enabling per-prediction visibility in your account.
On the model's Drift tab, sort feature drift by Prediction Drift Impact and Feature Importance.
On the model's Performance tab, sort the performance breakdown by Feature Importance.