How To: Labeling Queues
Labeling queues are sets of data that you want subject matter experts or third parties to label or score against any criteria you specify. You can use these annotations to create expert golden datasets for fine-tuning, and to find examples where LLM evals and human reviewers disagree.
To use labeling queues, you need:
A dataset you want to annotate
Annotator users in your space
Annotation criteria
In the settings page, you can invite your annotators by adding them as users with the Annotator account role. They will receive an email inviting them to your space and prompting them to set a password.
After you have created a dataset of traces you want to evaluate, you can create a labeling queue and distribute the records to your annotation team. You can then review your records and the annotations provided.
The columns that annotators label will appear on datasets as namespaced annotation columns (e.g. annotation.hallucination). The latest annotation value for a specific row is namespaced with latest.userannotation, which is helpful for experiments when multiple annotators label the same dataset.
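For example, once annotations are written back to a dataset, you can compare the human labels against an automated eval column to surface disagreements. The sketch below is not the product's API; it assumes the labeled dataset has been exported to CSV, and the file name and the eval.hallucination column are illustrative placeholders you would replace with your own export and eval column names.

```python
# Minimal sketch: find rows where human annotators and an LLM eval disagree.
# Assumes the labeled dataset was exported as "labeled_dataset.csv" and that an
# "eval.hallucination" column holds the automated eval score -- both are
# assumptions for illustration, not guaranteed column names.
import pandas as pd

df = pd.read_csv("labeled_dataset.csv")

# Rows where the latest human annotation differs from the LLM eval score.
disagreements = df[df["latest.userannotation"] != df["eval.hallucination"]]

print(f"{len(disagreements)} of {len(df)} rows show human/LLM disagreement")
print(disagreements[["annotation.hallucination", "latest.userannotation",
                     "eval.hallucination"]].head())
```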
Annotators see the labeling queues assigned to them and the data they need to annotate, with the label or score they need to provide shown in the top right. Your datasets can contain text, images, and links. Annotators can leave notes and use keyboard shortcuts to provide annotations faster.