Hugging Face
Arize helps you visualize your model performance, understand drift and data quality issues, and share insights learned from your models. Hugging Face is a library that offers on-demand models in its Model Hub as well as APIs for you to fine-tune NLP models and serve them directly from Hugging Face.
For additional context, check out our blog post on our partnership with Hugging Face.
The Hugging Face Inference API allows you to access public models and ones you have uploaded through the Model Hub. Depending on the task, Arize can be integrated in production directly inside the query function or within your model's pipeline.
Integration can be done in four simple steps: (1) set up your Arize and Hugging Face API/Space keys, (2) process the output (and/or features), (3) log to Arize, and (4) reformat and return the outputs, as sketched below.
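The following is a minimal sketch of these four steps, assuming a sentiment model served through the Hugging Face Inference API and the Arize single-record Python client (arize.api.Client). The model URL, model_id, feature names, and credentials are placeholders, not prescribed values.

```python
import uuid

import requests

from arize.api import Client
from arize.utils.types import Environments, ModelTypes

# Placeholder credentials -- substitute your own
ARIZE_SPACE_KEY = "YOUR_ARIZE_SPACE_KEY"
ARIZE_API_KEY = "YOUR_ARIZE_API_KEY"
HF_API_TOKEN = "YOUR_HUGGING_FACE_API_TOKEN"

# Example sentiment model served by the Hugging Face Inference API
API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
HF_HEADERS = {"Authorization": f"Bearer {HF_API_TOKEN}"}

# (1) Set up the Arize client with your Space and API keys
arize_client = Client(space_key=ARIZE_SPACE_KEY, api_key=ARIZE_API_KEY)


def query(payload):
    # Call the Hugging Face Inference API
    response = requests.post(API_URL, headers=HF_HEADERS, json=payload)
    predictions = response.json()[0]  # list of {"label": ..., "score": ...}

    # (2) Process the output: keep the top-scoring label
    top = max(predictions, key=lambda p: p["score"])

    # (3) Log the prediction to Arize
    arize_client.log(
        model_id="hf-sentiment-classifier",
        model_version="1.0",
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        prediction_id=str(uuid.uuid4()),
        prediction_label=(top["label"], top["score"]),
        features={"input_text": payload["inputs"]},
    )

    # (4) Reformat and return the outputs to the caller
    return {"label": top["label"], "score": top["score"]}
```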
Please follow this tutorial, where we walk you through working with the Hugging Face ecosystem to fine-tune a pre-trained language model for a sentiment classification task. In addition, we will extract text embedding vectors and send them to Arize, where we will leverage our embedding tools to learn about and troubleshoot our dataset.
Please follow this tutorial, where we walk you through working with the Hugging Face ecosystem to fine-tune a pre-trained language model for a token classification task, i.e., Named Entity Recognition (NER). In addition, we will extract token embedding vectors and send them to Arize, where we will leverage our embedding tools to learn about and troubleshoot our dataset.
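As a minimal sketch of the embedding-extraction step these tutorials cover (the model name, pooling strategy, and feature name below are illustrative assumptions rather than the tutorials' exact code), a text embedding can be pulled from a Hugging Face model and wrapped in Arize's Embedding type:

```python
import torch
from transformers import AutoModel, AutoTokenizer

from arize.utils.types import Embedding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")


def get_text_embedding(text: str) -> Embedding:
    # Tokenize and run a forward pass without gradients
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Use the [CLS] token's final hidden state as the sentence-level vector
    vector = outputs.last_hidden_state[:, 0, :].squeeze().tolist()
    # Pair the vector with the raw text so it can be inspected in Arize
    return Embedding(vector=vector, data=text)
```

The resulting Embedding can then be attached to a log call through the embedding_features argument, for example embedding_features={"text_embedding": get_text_embedding(text)}.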
For a zero-shot text classification problem, we log to Arize under ModelTypes.SCORE_CATEGORICAL since we want to record both the class label and the probability score.
If using SDK version < 4.0.0, replace space_key=ARIZE_SPACE_KEY with organization_key=ARIZE_SPACE_KEY.
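Below is a minimal sketch of this case using a local transformers zero-shot pipeline; the model, candidate labels, model_id, and credentials are illustrative placeholders:

```python
import uuid

from transformers import pipeline

from arize.api import Client
from arize.utils.types import Environments, ModelTypes

arize_client = Client(space_key="YOUR_ARIZE_SPACE_KEY", api_key="YOUR_ARIZE_API_KEY")

# Run zero-shot classification locally with a transformers pipeline
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
text = "I have a problem with my phone that needs to be resolved asap!"
result = classifier(text, candidate_labels=["urgent", "not urgent"])

# Labels are returned sorted by score; record the top label together with its probability
arize_client.log(
    model_id="zero-shot-text-classifier",
    model_version="1.0",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
    prediction_id=str(uuid.uuid4()),
    prediction_label=(result["labels"][0], result["scores"][0]),
    features={"input_text": text},
)
```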
On the client side in production, results can be queried by directly calling the query function.
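For example, with a query function like the one sketched earlier (a hypothetical wrapper, not a fixed API), the client passes the raw inputs and receives the reformatted outputs while the prediction is logged to Arize inside the call:

```python
# Client-side call; Arize logging happens inside query()
output = query({"inputs": "I loved the new design of the app!"})
print(output)  # e.g. {"label": "POSITIVE", "score": 0.99}
```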