Timeseries Widgets

How to use a timeseries widget on a dashboard

Overview

The timeseries widget graphs both model evaluation metrics and data metrics over time, at hourly or daily granularity.

Use Cases

  • Classification Evaluation Metrics (Daily or Hourly)
    • Accuracy, Recall, F1, Precision, False Positive Rate, etc.
  • Numeric Model Evaluation Metrics (Daily or Hourly)
    • RMSE, MAE, MAPE, etc.
  • Model Data Metrics (Daily or Hourly)
    • Feature Values
      • Categorical: count of a specific feature value
      • Numeric: average, P95, P5
      • Count/Total (%)
    • Prediction Values
      • Percent Error: predictions exceeding a percent-error threshold
      • Categorical: count of a specific prediction value
      • Numeric: average, P95, P5
      • Count/Total (%)
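To make the percent-error use case concrete, here is a minimal pandas sketch of the underlying computation: percent error per prediction, thresholded and aggregated daily. The column names (`prediction`, `actual`) and the threshold are illustrative assumptions, not the platform's API:

```python
import pandas as pd

# Hypothetical prediction log; column names are illustrative only.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-01-01", "2023-01-01", "2023-01-02", "2023-01-02"]
    ),
    "prediction": [100.0, 250.0, 90.0, 300.0],
    "actual": [110.0, 200.0, 100.0, 310.0],
})

# Percent error per prediction.
df["percent_error"] = (df["prediction"] - df["actual"]).abs() / df["actual"].abs()

# Flag predictions exceeding a percent-error threshold (e.g., 10%).
threshold = 0.10
df["exceeds_threshold"] = df["percent_error"] > threshold

# Daily count of threshold breaches, mirroring a daily timeseries plot.
daily_breaches = (
    df.set_index("timestamp")["exceeds_threshold"].resample("D").sum()
)
print(daily_breaches)
```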

Data Metrics

This type of chart tracks model data over time, for example the count of predictions versus actuals. Any timeseries widget supports adding additional plots and facet/slice filtering.
For example, let's add an additional line chart to the plot above showing the count of predictions where the feature fico_score is > 500.
Time Series Model Data
To add a plot to a widget, click the edit button in the widget's drop-down menu:
Edit Widget
Once in edit mode, the form lists the widget's plots; scroll down to find the Add Plot button.
The image below shows a new plot being added: the count of predictions where the feature fico_score is greater than 500:
New Plot
Here's the prediction count of the model conditioned on fico_score:
Prediction Count: Conditioned on a Feature "fico_score" > 500
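The conditioned count above can be sanity-checked outside the platform. This is a minimal pandas sketch assuming a hypothetical prediction log with `timestamp` and `fico_score` columns; it is not the platform's API:

```python
import pandas as pd

# Hypothetical prediction log; column names are illustrative only.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-01-01", "2023-01-01", "2023-01-01", "2023-01-02"]
    ),
    "fico_score": [480, 650, 720, 300],
})

# Total daily prediction count (the base plot).
total = df.set_index("timestamp").resample("D").size()

# Daily count filtered to fico_score > 500 (the added plot).
filtered = (
    df[df["fico_score"] > 500].set_index("timestamp").resample("D").size()
)
print(total, filtered)
```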

Evaluation Metrics

Evaluation metrics are available for every model; which metrics the platform supports depends on the model type.
Below is an example of an accuracy metric over a specific time period:
Accuracy Above
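A daily accuracy series like the chart above can be computed directly from a prediction log. A minimal pandas sketch, with hypothetical column names (`prediction`, `actual`); the platform computes this for you, so this is only an illustration of the metric:

```python
import pandas as pd

# Hypothetical classification log; column names are illustrative only.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-01-01", "2023-01-01", "2023-01-02", "2023-01-02"]
    ),
    "prediction": ["fraud", "not_fraud", "fraud", "fraud"],
    "actual":     ["fraud", "fraud",     "fraud", "fraud"],
})

# Accuracy per day: fraction of predictions matching actuals.
daily_accuracy = (
    (df["prediction"] == df["actual"])
    .groupby(df["timestamp"].dt.floor("D"))
    .mean()
)
print(daily_accuracy)
```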

Widget Configuration

  • Evaluation Metrics: Accuracy, RMSE, MSE, MAPE
  • Cohort analysis of evaluation metrics, slicing on any facet
  • Any environment
    • Production: Performance in production
    • Training/Validation: Plot point per training/validation run
  • User Defined Functions (UDFs) - Coming soon
Evaluation Selection for Classification
Configuration support for this widget type includes:
  • Setting the model version
  • Model Environment: Production, Training or Validation
  • Evaluation Metric
  • Slice/Facet Features
Example Configuration for Evaluation Metric
Questions? Email us at [email protected] or Slack us in the #arize-support channel