
Hugging Face

Arize helps you visualize your model performance, understand drift & data quality issues, and share insights learned from your models. Hugging Face is a library that offers on-demand models in its Model Hub as well as APIs for you to fine-tune NLP models and serve them directly from Hugging Face.
For additional context, check out the blog post on our partnership with Hugging Face.

Inference API Integration

The Hugging Face Inference API allows you to access public models and ones you have uploaded through the Model Hub. Depending on the task, Arize can be integrated directly in production inside the query function, or during your model's pipeline.
Integration can be done in 4 simple steps: (1) set up your Arize and Hugging Face API/space keys, (2) process outputs (and/or features), (3) log to Arize, and (4) reformat and return outputs.

Fine-Tune a Sentiment Classification Model Example

Please follow this notebook tutorial, where we walk you through working with the Hugging Face ecosystem to fine-tune a pre-trained language model for a sentiment classification task. In addition, we will extract text embedding vectors and send them to Arize, where we will leverage our embedding tools to learn about and troubleshoot our dataset.
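Extracting a text embedding vector usually means pooling the model's token-level hidden states into one vector per input. Below is a minimal numpy sketch of mean pooling with toy shapes and values (the function name and data are illustrative, not the notebook's actual code):

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Average the hidden states of the non-padding tokens into one text embedding."""
    mask = attention_mask[:, :, None]             # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = mask.sum(axis=1)                     # number of real tokens per example
    return summed / counts                        # (batch, hidden_dim)

# Toy hidden states: batch of 1, 4 tokens (last one is padding), hidden size 3
hidden = np.array([[[1.0, 2.0, 3.0],
                    [3.0, 2.0, 1.0],
                    [2.0, 2.0, 2.0],
                    [9.0, 9.0, 9.0]]])            # padding token, ignored by the mask
mask = np.array([[1, 1, 1, 0]])
embedding = mean_pool(hidden, mask)
print(embedding)  # [[2. 2. 2.]]
```

The resulting per-input vector is what gets sent to Arize as the embedding feature for each prediction.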

Fine-Tune a Named Entity Recognition Model Example

Please follow this notebook tutorial, where we walk you through working with the Hugging Face ecosystem to fine-tune a pre-trained language model for a token classification task, i.e., Named Entity Recognition (NER). In addition, we will extract token embedding vectors and send them to Arize, where we will leverage our embedding tools to learn about and troubleshoot our dataset.
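For token classification, one embedding vector is typically kept per labeled word; when the tokenizer splits a word into several subwords, a common convention is to keep the first subword's vector. A hypothetical numpy sketch of that alignment step (illustrative data, not the notebook's actual code):

```python
import numpy as np

def first_subword_embeddings(hidden_states, word_ids):
    """Keep one vector per word: the embedding of its first subword token."""
    vectors, seen = [], set()
    for pos, word_id in enumerate(word_ids):
        if word_id is None or word_id in seen:    # special token or continuation subword
            continue
        seen.add(word_id)
        vectors.append(hidden_states[pos])
    return np.stack(vectors)                      # (num_words, hidden_dim)

# Toy example: 5 token positions, hidden size 2; word 1 is split into two subwords
hidden = np.array([[0.0, 0.0],    # [CLS]
                   [1.0, 1.0],    # word 0
                   [2.0, 2.0],    # word 1, first subword
                   [3.0, 3.0],    # word 1, continuation subword
                   [0.0, 0.0]])   # [SEP]
word_ids = [None, 0, 1, 1, None]
print(first_subword_embeddings(hidden, word_ids))  # [[1. 1.] [2. 2.]]
```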

Zero-shot Text Classification Example

For a zero-shot text classification problem, we log to Arize under ModelTypes.SCORE_CATEGORICAL since we want to record both the class label and the probability score.
Depending on the specific NLP task and model pipeline, your response will be formatted differently. You may need to update Step 2 (processing outputs) to match the model output and Step 3 (logging to Arize) to match the specific NLP task goals.
import json
import uuid

import numpy as np
import requests
from arize.api import Client
from arize.types import ModelTypes

# Step 1: Set up Arize and Hugging Face API/space keys and tokens
ARIZE_SPACE_KEY = 'YOUR_ARIZE_SPACE_KEY'
ARIZE_API_KEY = 'YOUR_ARIZE_API_KEY'
arize = Client(space_key=ARIZE_SPACE_KEY, api_key=ARIZE_API_KEY)

API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-mnli"
YOUR_HUGGINGFACE_API_KEY = 'YOUR_HUGGINGFACE_API_KEY'
headers = {"Authorization": "Bearer {}".format(YOUR_HUGGINGFACE_API_KEY)}

def query(payload):
    # Standard request to the Hugging Face Inference API
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    output = json.loads(response.content.decode("utf-8"))
    # Step 2: Process output (and/or features) for logging to Arize
    idx = np.argmax(output['scores'])
    prediction, score = output['labels'][idx], output['scores'][idx]
    # Optional: build model features with your own feature pipeline
    # if you want to log them to Arize, e.g.:
    # features = feature_pipeline(data)
    # Step 3: Log to Arize
    arize_response = arize.log(
        model_id='facebook/bart-large-mnli',
        model_version='1.0',
        model_type=ModelTypes.SCORE_CATEGORICAL,
        prediction_id=str(uuid.uuid4()),
        prediction_label=(prediction, score),
    )
    arize_success = arize_response.result().status_code == 200
    # Step 4: Return the formatted output
    return {'prediction': prediction,
            'score': score,
            'arize-success': arize_success}
If you are using Arize SDK version < 4.0.0, replace space_key=ARIZE_SPACE_KEY with organization_key=ARIZE_SPACE_KEY.
On the client side in production, results can be queried by directly calling the function:
data = query(
{
"inputs": """
I recently bought a device from your company
but it is not working as advertised and
I would like to get reimbursed!
""",
"parameters": {"candidate_labels": ["refund", "legal", "faq"]},
}
)
>>> {'prediction': 'refund', 'score': 0.8680453300476074, 'arize-success': True}