Google BigQuery
Learn how to set up an import job using Google BigQuery
Navigate to the 'Upload Data' page on the left navigation bar in the Arize platform. From there, select the 'Google BQ' card or navigate to the Data Warehouse tab to begin a new table import job.
Storage Selection: Google BQ

Locate the Project ID, Dataset, and Table or View name of the table/view you would like to sync from Google BigQuery.
- The GBQ Project ID is a unique identifier for a project. See here for steps on how to retrieve this ID.
- The dataset and table name correspond to the path where your table is located.
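If you prefer the command line, one way to locate these values is with the gcloud and bq CLIs (this assumes both are installed and authenticated; the project and dataset names below are placeholders):

```bash
# List the GCP projects your account can see (the Project ID column is what Arize needs)
gcloud projects list

# List datasets in a project, then the tables/views inside a dataset
bq ls --project_id=my-gcp-project
bq ls my-gcp-project:my_dataset
```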

Console view to find Project ID, Dataset name and Table/View name
Add your Table ID in Arize. Arize will automatically parse your Dataset, Table Name, and GCP Project ID.

Example TableID
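For example, a Table ID of the form below (project, dataset, and table names are hypothetical) would be parsed into GCP Project ID `my-gcp-project`, Dataset `my_dataset`, and Table `my_table`:

```
my-gcp-project.my_dataset.my_table
```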
Tag your dataset/table/view with the `arize-ingestion-key` and the provided label value using the steps below. For more details, see docs on Adding labels to resources for BigQuery.

In the Arize UI: copy the `arize-ingestion-key` value.
Copy Arize Ingestion Key
Consider creating an authorized view if you don't want to grant access to the underlying tables, or granting access to each underlying table is too cumbersome.
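If you take the authorized-view route, the sketch below shows one way to create a view with the bq CLI; every name in the command is a placeholder, and the view still has to be authorized on the source dataset as described in the BigQuery docs.

```bash
# Create a view exposing only the columns Arize needs to read
# (project, dataset, view, and column names here are all hypothetical)
bq mk \
  --use_legacy_sql=false \
  --view 'SELECT prediction_id, prediction_ts, prediction_label FROM `my-gcp-project.source_dataset.predictions`' \
  my-gcp-project:shared_dataset.predictions_view
```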
From UI
From CLI
1. In the Google Cloud console, navigate to the BigQuery SQL Workspace.

2. Select the desired table or view, navigate to the Details tab, and click "Edit Details". Under the Labels section, click "Add Labels" and add the following label:
   - Key: "arize-ingestion-key"
   - Value: the arize-ingestion-key value from the Arize UI
3. Grant the `roles/bigquery.jobUser` role to our service account. Go to the IAM page and click "Grant Access".


Add Arize service account as "Principal" with "BigQuery Job User" role
- Navigate to your table/view from the BigQuery SQL Explorer page.
- Select "Share" and click on "Permissions".
- Click "Add Principal".
- Add our service account `[email protected]` as a BigQuery Data Viewer, and click "Save".

For a view, you must grant access to all of the underlying tables, so repeat these steps for each underlying table.
You can create a Cloud Shell instance from the UI to run the following commands.

1. Add the `arize-ingestion-key` key from the Arize UI as a label on the dataset:
bq update --set_label arize-ingestion-key:${KEY_FROM_UI} ${PROJECT_ID}:${DATASET}
2. Grant the `roles/bigquery.jobUser` role to the Arize service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:[email protected] --role=roles/bigquery.jobUser
3. Grant the `roles/bigquery.dataViewer` role to the Arize service account on your table or view. For a table:
bq add-iam-policy-binding \
  --member='serviceAccount:[email protected]' \
  --role='roles/bigquery.dataViewer' \
  ${PROJECT_ID}:${DATASET}.${TABLE}
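If you are syncing a view, repeat the same grant for the view and each table it references. A minimal sketch (the resource names after `in` are placeholders):

```bash
# Grant BigQuery Data Viewer on the view and every underlying table
# (resource names below are hypothetical)
for RESOURCE in predictions_view underlying_table_a underlying_table_b; do
  bq add-iam-policy-binding \
    --member='serviceAccount:[email protected]' \
    --role='roles/bigquery.dataViewer' \
    "${PROJECT_ID}:${DATASET}.${RESOURCE}"
done
```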
From UI
From CLI
1. In the Google Cloud console, navigate to the BigQuery SQL Workspace.

2. Select the desired dataset and click "Edit Details". Under the Labels section, click "Add Labels" and add the following label:
   - Key: "arize-ingestion-key"
   - Value: the arize-ingestion-key value copied from the Arize UI
3. Grant the `roles/bigquery.jobUser` role to the Arize service account. Go to the IAM page and click "Grant Access".

- Navigate to your dataset from the BigQuery SQL Explorer page.
- Select "Sharing" and click on "Permissions".
- Click "Add Principal".
- Add the Arize service account `[email protected]` as a BigQuery Data Viewer, and click "Save".

You can create a Cloud Shell instance from the UI to run the following commands.

1. Add the `arize-ingestion-key` key from the Arize UI as a label on the dataset:
bq update --set_label arize-ingestion-key:${KEY_FROM_UI} ${PROJECT_ID}:${DATASET}
2. Grant the `roles/bigquery.jobUser` role to the Arize service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:[email protected] --role=roles/bigquery.jobUser
3. To grant the `roles/bigquery.dataViewer` role to the Arize service account on your dataset, see the BigQuery guide to granting access to a dataset and follow the `bq` instructions there; a sketch is shown below.
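For reference, one common way to grant dataset-level read access from the CLI is to edit the dataset's access configuration; the file name below is arbitrary, and the exact entry to add should follow the BigQuery guide.

```bash
# Export the dataset's current access configuration
bq show --format=prettyjson ${PROJECT_ID}:${DATASET} > dataset_access.json

# Edit dataset_access.json and append an entry to the "access" array, e.g.:
#   {"role": "READER", "userByEmail": "[email protected]"}

# Apply the updated configuration back to the dataset
bq update --source dataset_access.json ${PROJECT_ID}:${DATASET}
```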
Match your model schema to your model type, and define it through the form input or a JSON schema (an illustrative sketch follows the property table below).

Set up model configurations

Map your table using a form

Map your table using a JSON schema
Property | Description | Required |
---|---|---|
prediction_ID | The unique identifier of a specific prediction. Limited to 128 characters. | Required |
timestamp | The timestamp of the prediction in seconds or an RFC3339 timestamp | Optional, defaults to current timestamp at file ingestion time |
change_timestamp* | The timestamp of the most recent change to a row, used to determine which records to ingest on each sync | Required* (only applicable for table upload) |
prediction_label | Column name for the prediction value | |
prediction_score | Column name for the predicted score | |
actual_label | Column name for the actual or ground truth value | Optional for production records |
actual_score | Column name for the ground truth score | |
prediction_group_id | Column name for ranking groups or lists in ranking models | |
rank | Column name for the rank of each element within its group or list | |
relevance_label | Column name for ranking actual or ground truth value | |
relevance_score | Column name for ranking ground truth score | |
features | A string prefix (feature/) identifying feature columns. Features must be sent in the same file as predictions | Arize automatically infers columns as features. Choose between feature prefixing OR inferred features. |
tags | A string prefix (tag/) identifying tag columns. Tags must be sent in the same file as predictions and features | Optional |
shap_values | A string prefix (shap/) identifying SHAP value columns. SHAP values must be sent in the same file as predictions or with a matching prediction_id | Optional |
version | A column to specify model version. version/ assigns a version to the corresponding data within a column, or configure your version within the UI | Optional, defaults to 'no_version' |
batch_id | Distinguish different batches of data under the same model_id and model_version. Must be specified as a constant during job setup or in the schema | Optional for validation records only |
exclude | A list of columns to exclude if the features property is not included in the ingestion schema | Optional |
embedding_features | A list of embedding columns, required vector column, optional raw data column, and optional link to data column. Learn more here | Optional |
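Purely as an illustration of how the properties above map onto columns, a JSON schema might pair property names with your table's column names as in the sketch below. The column names on the right are made up, and the exact keys and format accepted by Arize may differ, so use the in-product schema editor as the source of truth.

```json
{
  "prediction_id": "pred_id",
  "timestamp": "prediction_ts",
  "change_timestamp": "updated_at",
  "prediction_label": "predicted_class",
  "prediction_score": "predicted_score",
  "actual_label": "actual_class",
  "features": "feature/",
  "tags": "tag/",
  "version": "model_version"
}
```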
Once you fill in your applicable predictions, actuals, and model inputs, click 'Validate Schema' to visualize your model schema in the Arize UI. Check that your column names and corresponding data match for a successful import job.

Arize will attempt a dry run to validate your job for any access, schema, or record-level errors. If the dry run is successful, you can proceed to create the import job.

Once finished, Arize will begin querying your table and ingesting your records as model inferences. Arize will run queries to ingest records from your table based on your configured refresh interval.

From there, you will be taken to the 'Job Status' tab where you can see the status of your import jobs.

Table of your import jobs
All active jobs will regularly sync new data from your data source with Arize. You can view the job details and import progress by clicking on the job ID, which reveals more information about the job.

Audit trail of queries run on your table
To pause or edit your table schema, click on 'Job Options'.
- Delete a job if it is no longer needed or if you made an error connecting to the wrong table. This will set your job status as 'deleted' in Arize.
- Pause a job if you have a set cadence to update your table. This way, you can 'start job' when you know there will be new data to reduce query costs. This will set your job status as 'inactive' in Arize.

Job Status tab showing job listings
An import job may run into a few problems. Use the dry run and job details UI to troubleshoot and quickly resolve data ingestion issues.
If there is an error validating a file or table against the model schema, Arize will surface an actionable error message. From there, click on the 'Fix Schema' button to adjust your model schema.
If your dry run is successful, but your job fails, click on the job ID to view the job details. This uncovers job details such as information about the file path or query id, the last import job, potential errors, and error locations.

Within the Job Details section, you can select More Details on a specific query to view the start time and end time used in that query. The query start time represents the max value of change_timestamp based on the previous query, and the query end time is the current day/time that the query was run. The query start time is then updated after each query to reflect the current max change_timestamp. This can help debug issues specifically related to the change_timestamp field.
Once you've identified the job failure point, append the edited row to the end of your table with an updated change_timestamp value.
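For example, one way to re-append a corrected row with a fresh change_timestamp so that the next sync picks it up (the table path, column names, and ID value here are hypothetical, and change_timestamp is assumed to be a TIMESTAMP column):

```bash
# Copy the corrected row back into the table with an updated change_timestamp;
# add any fixed column values to the REPLACE list as needed
bq query --use_legacy_sql=false '
  INSERT INTO `my-gcp-project.my_dataset.predictions`
  SELECT * REPLACE (CURRENT_TIMESTAMP() AS change_timestamp)
  FROM `my-gcp-project.my_dataset.predictions`
  WHERE prediction_id = "abc123"
'
```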