How to Generate Your Own Embedding

Embedding vectors are generally extracted from the activation values of one or more hidden layers of your model.

Ways to Obtain Embedding Vectors

In general, there are many ways of obtaining embedding vectors, including:

  1. Word Embeddings

  2. Autoencoder Embeddings

  3. Generative Adversarial Networks (GANs)

  4. Pre-trained Embeddings

Given the accessibility of pre-trained transformer models, we will focus on them. This involves taking models, such as BERT or GPT-x, that were trained on a large dataset and made publicly available, and then fine-tuning them on a specific task.
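
As a sketch of what this looks like in practice, assuming the Hugging Face transformers library, a publicly available checkpoint can be downloaded in a couple of lines (bert-base-uncased is just an illustrative choice):

from transformers import AutoModel, AutoTokenizer

# download a publicly available pre-trained checkpoint;
# "bert-base-uncased" is only an example
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# from here, the model can be fine-tuned on your specific task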

Use Case Examples

Once you have chosen a model to generate embeddings, the question is: how? You must generate your embedding in such a way that the resulting vector represents your input according to your use case.

If you are working on image classification, the model will take an image and classify it into a given set of categories. Each embedding vector should then represent the corresponding input image in its entirety.

First, we need to use a feature_extractor that will take an image and prepare it for the large pre-trained image model.
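
For reference, here is a minimal sketch of how the feature_extractor and model used below might be loaded with Hugging Face transformers; the ViT checkpoint name is an illustrative assumption, not a requirement:

import torch
from transformers import ViTFeatureExtractor, ViTModel

device = "cuda" if torch.cuda.is_available() else "cpu"
# example checkpoint; any comparable pre-trained vision transformer works
checkpoint = "google/vit-base-patch16-224-in21k"
feature_extractor = ViTFeatureExtractor.from_pretrained(checkpoint)
model = ViTModel.from_pretrained(checkpoint).to(device)

With those objects in place, we can preprocess a batch of images: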

# convert each image in the batch to RGB and preprocess it into the
# pixel-value tensors the model expects
inputs = feature_extractor(
    [x.convert("RGB") for x in batch["image"]],
    return_tensors="pt"
).to(device)

Then, we pass the results from the feature_extractor to our model. In PyTorch, we use torch.no_grad() since we don't need to compute gradients for backpropagation; we are not training the model in this example.

# run a forward pass without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

These outputs must contain the activation values of the model's hidden layers, since you will be using them to construct your embeddings. In this example, we will use just the last hidden layer.

last_hidden_state = outputs.last_hidden_state
# last_hidden_state.shape = (batch_size, num_image_tokens, hidden_size)

Finally, since we want the embedding vector to represent the entire image, we will average across the second dimension, which corresponds to the image patches (the areas of the image).

# average over the patch dimension and move the result to CPU as a NumPy array
embeddings = torch.mean(last_hidden_state, 1).cpu().numpy()
# embeddings.shape = (batch_size, hidden_size)
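
Putting the steps together, here is a minimal end-to-end sketch, assuming the feature_extractor, model, and device objects defined above:

def embed_images(images):
    # images: a list of PIL images
    inputs = feature_extractor(
        [img.convert("RGB") for img in images],
        return_tensors="pt"
    ).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    # mean-pool over the patch dimension: one embedding vector per image
    return torch.mean(outputs.last_hidden_state, 1).cpu().numpy()

Each row of the returned array is the embedding vector for one input image.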

Additional Resources

Check out our tutorials on how to generate embeddings for different use cases using large, pre-trained models.

NLP: Multi-Class Sentiment Classification using Hugging Face

NLP: Multi-Class Sentiment Classification using OpenAI

NLP: Named Entity Recognition using Hugging Face

CV: Image Classification using Hugging Face
