Customer Lifetime Value
Overview of how to use Arize for customer lifetime value models
Check out our Customer Lifetime Value Colab to see how you can leverage ML Observability for your models!
In just a few clicks, Arize automatically configures monitors that are best suited to your data to proactively detect drift, data quality, and performance issues.
- Datasets: Training Version 1.0
- Default Metric: RMSE, Trigger Alert When: RMSE is above 80
- Turn On Monitoring: Drift ✅, Data Quality ✅, Performance ✅
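The alert trigger above can be illustrated with a minimal sketch: compute RMSE over a batch of predictions and flag it when it crosses the threshold. The sample values and threshold variable here are hypothetical, not part of Arize's API.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between actual and predicted LTV."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical LTV predictions vs. actuals (dollar values).
actual = [120.0, 340.0, 95.0, 410.0]
predicted = [150.0, 300.0, 80.0, 480.0]

score = rmse(actual, predicted)
ALERT_THRESHOLD = 80.0  # mirrors the "RMSE is above 80" trigger above
if score > ALERT_THRESHOLD:
    print(f"ALERT: RMSE {score:.1f} exceeds threshold {ALERT_THRESHOLD}")
```

In production, Arize evaluates this continuously over your chosen time window rather than on a one-off batch.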
Visualize feature and model drift between model environments and versions to identify LTV patterns and anomalous distribution behavior. Arize provides drift-over-time widgets overlaid with your metric of choice (in this case, RMSE) so you can clearly determine whether drift is contributing to performance degradation.
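One common way to quantify the distribution shift behind such a drift widget is the Population Stability Index (PSI), which compares a baseline (e.g. training) distribution against production. This is a generic sketch of the statistic, not necessarily the exact computation Arize runs; the feature data here is synthetic.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline distribution
    (e.g. training) and a comparison distribution (e.g. production)."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges come from the baseline; open-ended outer bins catch outliers.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) and division by zero.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(100, 20, 5000)  # e.g. a monthly-charges feature at training
drifted = rng.normal(115, 20, 5000)   # production distribution shifted upward
print(f"PSI: {psi(baseline, drifted):.3f}")
```

A rule of thumb often used with PSI: values above roughly 0.2 indicate significant drift worth investigating.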
With the insights Arize provides, you can dive into root causes and quickly build intuition, allowing ML teams to iterate, experiment, and ship new models to production.
Arize enables automatic drill-down into low-performing slices (feature/value combinations) through the Feature Performance Heatmap.
Pay close attention to feature/value combinations that could be indicative of model weaknesses. In the LTV example from the Colab, we used filters on the Feature Performance Heatmap to narrow down to Fiber Optic in Internet Service and Yes in Streaming TV across Cities.
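The slice analysis behind such a heatmap can be sketched with a simple groupby: compute RMSE per feature/value combination and rank the cells. The DataFrame below is hypothetical sample data whose column names echo the telco features mentioned above.

```python
import numpy as np
import pandas as pd

# Hypothetical scored LTV data (columns mirror the telco features above).
df = pd.DataFrame({
    "InternetService": ["Fiber Optic", "DSL", "Fiber Optic", "DSL", "Fiber Optic", "DSL"],
    "StreamingTV":     ["Yes", "No", "Yes", "Yes", "No", "No"],
    "actual":    [400.0, 120.0, 380.0, 150.0, 300.0, 110.0],
    "predicted": [250.0, 115.0, 260.0, 140.0, 290.0, 105.0],
})

# RMSE per feature/value slice -- the worst cells are where to drill down.
heatmap = (
    df.assign(sq_err=(df["actual"] - df["predicted"]) ** 2)
      .groupby(["InternetService", "StreamingTV"])["sq_err"]
      .mean()
      .pow(0.5)
      .sort_values(ascending=False)
)
print(heatmap)
```

In this toy data the (Fiber Optic, Yes) slice has by far the worst RMSE, which is exactly the kind of cell the heatmap surfaces for investigation.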
As we continue to check in and improve our model's performance, we want to be able to quickly and efficiently view all our important model metrics in a single pane. Use our Regression Model Performance Dashboard to set up a customizable dashboard for a single glance view of your model's important metrics.
In the case of LTV, we look at the model's RMSE, MAPE, and MAE values.
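For reference, the three dashboard metrics can be computed from the same prediction/actual pairs. This is a minimal sketch with hypothetical values; note that MAPE assumes no actual LTV is zero.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAPE, and MAE -- the three metrics on the LTV dashboard."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100),  # percent; assumes y_true != 0
        "MAE":  float(np.mean(np.abs(err))),
    }

# Hypothetical LTV actuals vs. predictions.
print(regression_metrics([100.0, 200.0, 400.0], [110.0, 180.0, 430.0]))
```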
In only a few clicks we can visualize these metrics over time by creating a time series widget and overlaying three plots, one per metric, to showcase how each fluctuates.
By visualizing feature drift and model performance, and understanding the features responsible, ML engineers gain additional context when troubleshooting model performance issues. Possible action items that could improve the model's performance:
- Examine possible concept drift relating to the features in question
- Retrain the model to fit the new distributions specific to this drift
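The retraining action item can be illustrated with a toy example: a model fit on the old regime degrades once the tenure/LTV relationship shifts, and refitting on a recent window recovers performance. Everything here is synthetic, and for simplicity the retrained model is scored on its own fitting window rather than a held-out set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical: tenure (months) vs. LTV, where the relationship changed recently.
tenure_old = rng.uniform(1, 60, 500)
ltv_old = 50 + 5.0 * tenure_old + rng.normal(0, 10, 500)
tenure_new = rng.uniform(1, 60, 500)
ltv_new = 80 + 3.5 * tenure_new + rng.normal(0, 10, 500)  # concept drift

# A model trained on the old regime degrades on new data ...
slope_old, intercept_old = np.polyfit(tenure_old, ltv_old, 1)
pred_stale = slope_old * tenure_new + intercept_old
rmse_stale = float(np.sqrt(np.mean((ltv_new - pred_stale) ** 2)))

# ... so retrain on a recent window that reflects the drifted distribution.
slope_new, intercept_new = np.polyfit(tenure_new, ltv_new, 1)
pred_retrained = slope_new * tenure_new + intercept_new
rmse_retrained = float(np.sqrt(np.mean((ltv_new - pred_retrained) ** 2)))

print(f"stale RMSE: {rmse_stale:.1f}, retrained RMSE: {rmse_retrained:.1f}")
```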