Azure Blob Storage
Set up an import job to ingest data into Arize from Azure
Set up an import job to log inference files to Arize. Users generally find a sweet spot around a few hundred thousand to a million rows per file, with a total file size limit of 1GB.
Create a blob storage container and, optionally, a folder from which you would like Arize to pull your model's inferences.
For example, you might set up a container named `bucket1` with a folder `/click-thru-rate/production/v1/` that contains CSV files of your model inferences. In this example, your bucket name is `bucket1` and your prefix is `click-thru-rate/production/v1/`.
There are multiple ways to structure model data. To easily ingest model inference data from storage, adopt a standardized directory structure across all models.
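If you prefer the command line, a container like the one above can be created with the Azure CLI (installed in the next step). This is a minimal sketch; the storage account name is a placeholder:

# Create the example container; blob storage has no real folders, so the
# click-thru-rate/production/v1/ prefix is created implicitly by the blob names you upload
az storage container create \
  --account-name <your-storage-account> \
  --name bucket1 \
  --auth-mode login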
Follow the steps to download the Azure CLI: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
Add the Arize Service Principal by referencing our application id:
az ad sp create --id eb6cb4d2-f42d-4ef6-bacb-2417d3086e47
Azure Portal
Find the storage account that your container was created under, and click "Access Control (IAM)"

Go to "Role Assignments" and click "Add"

Search for "Storage Blob Data Reader" and click on it

Click "Next" and check "Assign access to: User, group, or service principal". Click on "Select Members" and search for "Arize".

Click on "Review + Assign"

Ensure our Service Principal appears as having the "Storage Blob Data Reader" role

Azure CLI
Alternatively, run the following Azure CLI command:
Note the following environment variable substitutions:
- ${OBJECT_ID}: The object ID returned from creating the Arize Service Principal (not the same as the application ID, and unique to your account)
- ${YOUR_SUBSCRIPTION_ID}: The Azure subscription ID for your storage account
- ${YOUR_RESOURCE_GROUP}: The resource group your storage account resides in
- ${YOUR_STORAGE_ACCOUNT_NAME}: The storage account name
az role assignment create \
--role "Storage Blob Data Reader" \
--assignee-object-id ${OBJECT_ID} \
--assignee-principal-type ServicePrincipal \
--scope /subscriptions/${YOUR_SUBSCRIPTION_ID}/resourceGroups/${YOUR_RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/${YOUR_STORAGE_ACCOUNT_NAME}
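If you did not capture the object ID when creating the service principal, you can look it up and sanity-check the assignment with something like the following (on older CLI versions the object ID field may be named objectId instead of id):

# Look up the object ID of the Arize Service Principal created earlier
OBJECT_ID=$(az ad sp show --id eb6cb4d2-f42d-4ef6-bacb-2417d3086e47 --query id -o tsv)

# Confirm the role assignment is visible at the storage account scope
az role assignment list \
  --assignee ${OBJECT_ID} \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/${YOUR_SUBSCRIPTION_ID}/resourceGroups/${YOUR_RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/${YOUR_STORAGE_ACCOUNT_NAME}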
Navigate to the 'Upload Data' page on the left navigation bar in the Arize platform. From there, select the 'Azure Blob Storage' card to begin a new file import job.

Fill in the file path where you would like Arize to pull your model's inferences. Arize will automatically infer your bucket name and prefix.

Also specify your Azure AD Tenant ID and Azure Storage Account Name. The Tenant ID can be found on the following page in the Azure portal:
Search for "Azure Active Directory"

Take note of your tenant ID:

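If you have the Azure CLI configured, the tenant ID can also be retrieved directly, assuming you are logged in to the correct tenant:

# Print the tenant ID of the currently active subscription
az account show --query tenantId -o tsv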
In this example, you might have a bucket and folder named `azure://example-demo-bucket/click-thru-rate/production/v1/` that contains parquet files of your model inferences. Your bucket name is `example-demo-bucket` and your prefix is `click-thru-rate/production/v1/`.
The file structure can take into consideration various model environments (training, production, etc.) and locations of ground truth. In addition, Azure blob store import allows recursive operations: all nested subdirectories within the specified bucket prefix are included, regardless of their number or depth.
File Directory Example
There are multiple ways to structure your file directory. If actuals and predictions can be sent together, simply store this data in the same file and import it through a single file importer job.
In the case of delayed actuals, we recommend separating your predictions and actuals into separate folders and loading the data through two separate file importer jobs. Learn more here.
azure://bucket1/click-thru-rate/production/prediction/
├── 11-19-2022.parquet
├── 11-20-2022.parquet
└── 11-21-2022.parquet
azure://bucket1/click-thru-rate/production/actuals/
├── 12-1-2022.parquet # same prediction id column, model, and space as the corresponding prediction
├── 12-2-2022.parquet
└── 12-3-2022.parquet
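To double-check that your files are laid out as expected before creating the job, you can list the blobs under a prefix with the Azure CLI (the storage account name is a placeholder):

# List blob names under the prediction prefix
az storage blob list \
  --account-name <your-storage-account> \
  --container-name bucket1 \
  --prefix click-thru-rate/production/prediction/ \
  --query "[].name" \
  --output tsv \
  --auth-mode login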
In your container metadata, add an entry with the key `arize_ingestion_key` and the provided tag value.
- In the Arize UI: Copy the `arize_ingestion_key` value.
- In the Azure UI: Navigate to your Container -> Settings -> Metadata.

Click on Metadata and fill out the key-value pair defined in the Arize UI.
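The same metadata entry can also be set from the Azure CLI if that is more convenient; the storage account, container, and key value below are placeholders:

# Set the ingestion key on the container metadata
# Note: this sets the container's metadata, so include any existing keys you want to keep
az storage container metadata update \
  --account-name <your-storage-account> \
  --name bucket1 \
  --metadata arize_ingestion_key=<value-from-arize-ui> \
  --auth-mode login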
Model schema parameters are a way of organizing model inference data to ingest to Arize. When configuring your schema, be sure to match your data column headers with the model schema.
You can either use a form or a simple JSON-based schema to specify the column mapping.
Arize supports CSV, Parquet, Avro, and Apache Arrow. Refer here for a list of the expected data types by input type.



| Property | Description | Required |
|---|---|---|
| prediction_id | The unique identifier of a specific prediction. Limited to 128 characters. | Required |
| timestamp | The timestamp of the prediction in seconds or an RFC3339 timestamp | Optional, defaults to current timestamp at file ingestion time |
| prediction_label | Column name for the prediction value | |
| prediction_score | Column name for the predicted score | |
| actual_label | Column name for the actual or ground truth value | Optional for production records |
| actual_score | Column name for the ground truth score | |
| prediction_group_id | Column name for ranking groups or lists in ranking models | |
| rank | Column name for the rank of each element within its group or list | |
| relevance_label | Column name for the ranking actual or ground truth value | |
| relevance_score | Column name for the ranking ground truth score | |
| features | A string prefix (`feature/`) to designate feature columns. Features must be sent in the same file as predictions | Arize automatically infers columns as features. Choose between feature prefixing OR inferred features. |
| tags | A string prefix (`tag/`) to designate tag columns. Tags must be sent in the same file as predictions and features | Optional |
| shap_values | A string prefix (`shap/`) to designate SHAP value columns. SHAP values must be sent in the same file as predictions or with a matching prediction_id | Optional |
| version | A column to specify the model version. `version/` assigns a version to the corresponding data within a column, or configure your version within the UI | Optional, defaults to 'no_version' |
| batch_id | Distinguishes different batches of data under the same model_id and model_version. Must be specified as a constant during job setup or in the schema | Optional for validation records only |
| exclude | A list of columns to exclude if the features property is not included in the ingestion schema | Optional |
| embedding_features | A list of embedding columns: a required vector column, an optional raw data column, and an optional link-to-data column. Learn more here | Optional |
Once you fill in your applicable predictions, actuals, and model inputs, click 'Validate Schema' to visualize your model schema in the Arize UI. Check that your column names and corresponding data match for a successful import job.

Once finished, your import job will be created and will start polling your bucket for files.
If your model receives delayed actuals, connect your predictions and actuals using the same prediction ID, which links your data together in the Arize platform. Arize regularly checks your data source for both predictions and actuals, and ingests them separately as they become available. Learn more here.
Arize will attempt a dry run to validate your job for any access, schema, or record-level errors. If the dry run is successful, you can proceed to create the import job. From there, you will be taken to the 'Job Status' tab.
All active jobs will regularly sync new data from your data source with Arize. You can view the job details by clicking on the job ID, which reveals more information about the job.

Job Status tab showing job listings
To pause, delete, or edit your file schema, click on 'Job Options'.
- Delete a job if it is no longer needed or if you made an error connecting to the wrong bucket. This will set your job status as 'deleted' in Arize.
- Pause a job if you have a set cadence to update your table. This way, you can 'start job' when you know there will be new data to reduce query costs. This will set your job status as 'inactive' in Arize.
- Edit a file schema if you have added, renamed, or missed a column in the original schema declaration.
An import job may run into a few problems. Use the dry run and job details UI to troubleshoot and quickly resolve data ingestion issues.
If there is an error validating a file against the model schema, Arize will surface an actionable error message. From there, click on the 'Fix Schema' button to adjust your model schema.
If your dry run is successful, but your job fails, click on the job ID to view the job details. This uncovers job details such as information about the file path or query id, the last import job, potential errors, and error locations.
Once you've identified the job failure point, fix the file errors and reupload the file to Arize with a new name.
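For example, a corrected file can be re-uploaded under a new blob name with the Azure CLI; the names and paths below are placeholders:

# Upload the corrected file under a new name so Arize picks it up as a new file
az storage blob upload \
  --account-name <your-storage-account> \
  --container-name bucket1 \
  --name click-thru-rate/production/prediction/11-21-2022-fixed.parquet \
  --file ./11-21-2022-fixed.parquet \
  --auth-mode login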