Azure Blob Storage
Set up an import job to ingest data into Arize from Azure
You will need to contact [email protected] to set up Azure Blob Storage. Refer to step 3 for more details.
Set up an import job to log inference files to Arize. Files are checked for updates every 10 seconds. Users generally find a sweet spot of a few hundred thousand to a million rows per file, with a maximum file size of 1GB.
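To stay within those limits, you might split large batches of inferences into multiple files before uploading. A minimal sketch using only the standard library; the row cap and file-naming pattern are illustrative choices, not Arize requirements:

```python
# Split inference rows into CSV files of at most MAX_ROWS rows each,
# keeping every uploaded file comfortably under the 1GB per-file limit.
import csv
import io

MAX_ROWS = 500_000  # a mid-range value in the suggested sweet spot


def chunk_rows(rows, max_rows=MAX_ROWS):
    """Yield successive lists of at most max_rows rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == max_rows:
            yield batch
            batch = []
    if batch:
        yield batch


def write_csv_chunks(header, rows, max_rows=MAX_ROWS):
    """Return a list of (filename, csv_text) pairs, one file per chunk."""
    files = []
    for i, batch in enumerate(chunk_rows(rows, max_rows)):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        writer.writerows(batch)
        files.append((f"inferences-part-{i:04d}.csv", buf.getvalue()))
    return files
```

Each resulting file can then be uploaded to the container under your chosen prefix.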
Create a blob storage container and, optionally, a folder from which you would like Arize to pull your model's inferences.
For example, you might set up a container named bucket1 with a folder /click-thru-rate/production/v1/ that contains CSV files of your model inferences. In this example, your bucket name is bucket1 and your prefix is click-thru-rate/production/v1/.
There are multiple ways to structure model data. To easily ingest model inference data from storage, adopt a standardized directory structure across all models.
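One common convention is to encode the model name, environment, and version directly in the prefix. A minimal sketch of such a helper; the function and the naming scheme are illustrative assumptions, not part of Arize:

```python
# Build a standardized blob prefix of the form
#   <model-name>/<environment>/<version>/
# so every model's inference files land in a predictable location.
# This convention is illustrative, not mandated by Arize.

VALID_ENVIRONMENTS = {"production", "training", "validation"}


def inference_prefix(model_name: str, environment: str, version: str) -> str:
    """Return a normalized folder prefix for a model's inference files."""
    if environment not in VALID_ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment!r}")
    # Lowercase the model name and strip stray slashes for consistency.
    parts = [model_name.strip("/").lower(), environment, version.strip("/")]
    return "/".join(parts) + "/"
```

For the example above, `inference_prefix("click-thru-rate", "production", "v1")` yields the prefix `click-thru-rate/production/v1/`.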
Find the storage account name that your container is created under, and choose one access key to share with Arize.
In the Azure UI, from the Container page, navigate back to the storage account in the top left.

To get a shared key, navigate to Access Keys in the storage account menu. Our team will contact you so you can securely share your container name and storage account name and provide your access key. Once we confirm setup is complete, you can begin setting up import jobs.
Navigate to the 'Upload Data' page on the left navigation bar in the Arize platform. From there, select the 'Azure Blob Storage' card to begin a new file import job.

Fill in Bucket Name and Prefix details.

In this example, you might have a bucket and folder named azure://example-demo-bucket/click-thru-rate/production/v1/ that contains parquet files of your model inferences. Your bucket name is example-demo-bucket and your prefix is click-thru-rate/production/v1/.
The file structure can take into consideration various model environments (training, production, etc.) and locations of ground truth.
Example 1: Predictions & Actuals Stored in Separate Folders (different prefixes)
This example contains model predictions and actuals in separate files. This helps in cases of delayed actuals.
azure://example-demo-bucket/click-thru-rate/production/
├── prediction-folder/
│   ├── 12-1-2022.parquet  # this file can contain multiple versions
│   ├── 12-2-2022.parquet
│   └── 12-3-2022.parquet
└── actuals-folder/
    ├── 12-1-2022.parquet
    ├── 12-2-2022.parquet
    └── 12-3-2022.parquet
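A pipeline that writes delayed actuals can compute the destination paths for both folders programmatically. A short sketch following the layout above; the base prefix and M-D-YYYY date pattern come from the example tree, while the helper functions themselves are assumptions:

```python
# Sketch: compute destination blob paths for a day's predictions and the
# (possibly delayed) actuals for that same day, following the
# prediction-folder/ and actuals-folder/ layout shown above.
from datetime import date

BASE = "click-thru-rate/production"


def prediction_path(day: date) -> str:
    """Path for the prediction file for a given day (M-D-YYYY naming)."""
    return f"{BASE}/prediction-folder/{day.month}-{day.day}-{day.year}.parquet"


def actuals_path(day: date) -> str:
    """Path for the actuals file, which may be written days later."""
    return f"{BASE}/actuals-folder/{day.month}-{day.day}-{day.year}.parquet"
```

Because the two folders share file names keyed by date, actuals uploaded later still line up with the predictions they correspond to.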
Example 2: Production & Training Stored in Separate Folders
This example separates model environments (production and training).
azure://example-demo-bucket/click-thru-rate/v1/