Spell

Integrating Arize with Spell, a model serving and tooling platform

Arize helps you visualize your model performance, understand drift and data quality issues, and share insights learned from your models. Spell is an end-to-end ML platform that provides infrastructure for companies to train and deploy models.

Read more about the platforms in our partnership announcement.

You can either work through the example on Colab, or follow the steps below with your own model!

Step 1: Log in to Spell via the command line.

$ spell login

Step 2: Train and create a model with Spell.

$ spell run \
    --github-url https://github.com/spellml/examples \
    --machine-type cpu \
    --mount public/tutorial/churn_data/:/mnt/churn_prediction/ \
    --pip arize --pip lightgbm \
    -- python arize/train.py
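
The run installs the arize and lightgbm packages; the general pattern of training a churn model and logging its training records to Arize with the pandas logger looks roughly like the sketch below. This is a minimal, illustrative example: the CSV filename, column names, and model identifiers are placeholder assumptions rather than the actual contents of arize/train.py, and the Client keyword names can vary by arize SDK version.

import os

import lightgbm as lgb
import pandas as pd
from arize.pandas.logger import Client, Schema
from arize.utils.types import Environments, ModelTypes

# Hypothetical file name inside the mounted /mnt/churn_prediction/ directory
df = pd.read_csv("/mnt/churn_prediction/churn_data.csv")
feature_cols = [c for c in df.columns if c not in ("customer_id", "churned")]

# Train a simple LightGBM classifier on the churn label
model = lgb.LGBMClassifier()
model.fit(df[feature_cols], df["churned"])
df["prediction"] = model.predict(df[feature_cols]).astype(str)
df["churned"] = df["churned"].astype(str)
df["customer_id"] = df["customer_id"].astype(str)

# Log the training records to Arize under the TRAINING environment
arize_client = Client(
    space_id=os.environ["ARIZE_SPACE_ID"],  # space_key= on older SDK versions
    api_key=os.environ["ARIZE_API_KEY"],
)
arize_client.log(
    dataframe=df,
    schema=Schema(
        prediction_id_column_name="customer_id",
        prediction_label_column_name="prediction",
        actual_label_column_name="churned",
        feature_column_names=feature_cols,
    ),
    model_id="churn-prediction",
    model_version="v1",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.TRAINING,
)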

Step 3: Add your Arize API_KEY and SPACE_ID to serve_sync.py and serve_async.py. You can find your Arize credential details here.
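
Inside the serving script, wiring the credentials in usually amounts to reading them from the environment (they are passed via --env in Step 4) or pasting them in directly, then constructing an Arize client. The sketch below is illustrative, not the actual contents of serve_sync.py: the log_prediction helper and the model id/version are assumptions, and the Client keyword names can vary by arize SDK version.

import os

from arize.api import Client
from arize.utils.types import Environments, ModelTypes

# Credentials injected via `spell server serve --env ...` in Step 4
arize_client = Client(
    space_id=os.environ["ARIZE_SPACE_ID"],  # space_key= on older SDK versions
    api_key=os.environ["ARIZE_API_KEY"],
)

def log_prediction(prediction_id: str, features: dict, label: str) -> None:
    """Hypothetical helper: send one production prediction to Arize."""
    arize_client.log(
        model_id="churn-prediction",
        model_version="v1",
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        prediction_id=prediction_id,
        prediction_label=label,
        features=features,
    )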

Step 4: Create your model and serve it.

$ spell model create churn-prediction "runs/$RUN_ID"
$ spell server serve \
    --node-group default \
    --min-pods 1 --max-pods 3 \
    --target-requests-per-second 100 \
    --pip lightgbm --pip arize \
    --env ARIZE_SPACE_ID=$ARIZE_SPACE_ID \
    --env ARIZE_API_KEY=$ARIZE_API_KEY \
    churn-prediction:v1 serve_sync.py  # or serve_async.py

Step 5: Test your working instance by sending in some data, and confirm that your model is observable in Arize.

$ curl -X POST -d '@test_payload.txt' \
    https://$REGION.$CLUSTER.spell.services/$SPACE/churn-prediction/predict
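
The same check can be run from Python; this is just an HTTP POST equivalent of the curl command above, with the endpoint placeholders left for your own values:

import requests

# Fill in the same region/cluster/space values used in the curl command
url = "https://REGION.CLUSTER.spell.services/SPACE/churn-prediction/predict"

# test_payload.txt must match whatever serve_sync.py / serve_async.py expects
with open("test_payload.txt", "rb") as f:
    response = requests.post(url, data=f.read())

print(response.status_code, response.text)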
