How to Generate Your Own Embeddings
Embedding vectors are generally extracted from the activation values of one or more hidden layers of your model.
Ways to obtain embedding vectors
In general, there are many ways of obtaining embedding vectors, including:
Word embeddings
Autoencoder embeddings (see the sketch after this list)
Generative Adversarial Network (GAN) embeddings
Pre-trained embeddings
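As a brief illustration of the autoencoder approach, here is a minimal sketch in PyTorch; the architecture and dimensions are illustrative assumptions, not prescribed by this guide. After the network is trained to reconstruct its input, the encoder's output serves as the embedding vector.

```python
import torch.nn as nn

# Minimal autoencoder sketch; layer sizes are illustrative assumptions.
class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, embedding_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, embedding_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(embedding_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)    # z is the embedding vector
        return self.decoder(z)
```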
Given the accessibility of pre-trained transformer models, we will focus on them. This involves taking a model, such as BERT or GPT-x, that was trained on a large dataset and made publicly available, then fine-tuning it on a specific task.
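As a minimal sketch, assuming the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, extracting a sentence embedding from a pre-trained model might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load a publicly available pre-trained model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One common choice: use the last hidden state at the [CLS] token
# position as the embedding for the whole sentence.
embedding = outputs.last_hidden_state[:, 0, :]
```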
Use Case Examples
Once you have chosen a model to generate embeddings, the next question is: how? You must generate your embeddings such that the resulting vector represents your input according to your use case.
If you are working on image classification, the model will take an image and classify it into a given set of categories. Each embedding vector should be representative of the entire corresponding image input.
First, we need a `feature_extractor` that takes an image and prepares it for the large pre-trained image model. Then, we pass the results from the `feature_extractor` to our `model`. In PyTorch, we wrap the forward pass in `torch.no_grad()`: we don't need to compute gradients for backpropagation, since we are not training the model in this example.
These outputs must contain the activation values of the model's hidden layers, since you will use them to construct your embeddings. In this scenario, we will use just the last hidden layer.
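Putting these steps together, a minimal sketch might look like the following; the ViT checkpoint and the image path are illustrative assumptions:

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModel

# Load a pre-trained image model and its matching feature extractor;
# the checkpoint name here is an illustrative assumption.
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = AutoModel.from_pretrained("google/vit-base-patch16-224-in21k")

# Prepare the image for the model.
image = Image.open("example.jpg")  # hypothetical image path
inputs = feature_extractor(images=image, return_tensors="pt")

# No gradients needed: we are not training the model in this example.
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Activations of the last hidden layer,
# shape: (batch_size, num_patches + 1, hidden_dim).
last_hidden_state = outputs.hidden_states[-1]
```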
Finally, since we want the embedding vector to represent the entire image, we average across the second dimension, which represents the patches (areas) of the image.
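Continuing the sketch above, the averaging step reduces the patch dimension to produce one vector per image:

```python
# Average across the second dimension (the image patches) so that a
# single vector of shape (batch_size, hidden_dim) represents the image.
embedding = last_hidden_state.mean(dim=1)
```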
Additional Resources
Check out our tutorials on how to generate embeddings for different use cases using large, pre-trained models.