Add prompt templates and variables to your dataset
Datasets are critical to evaluating and benchmarking your LLM apps across different use cases and test scenarios. Using experiments, you can run each iteration of a prompt against the same dataset and compare outputs across runs. There are two points at which you can attach prompt templates and variables to your data: during instrumentation, or during dataset creation.
During instrumentation
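As a minimal sketch of the instrumentation path: the idea is to record the prompt template and its variables as span attributes on each LLM call, following the OpenInference semantic convention keys llm.prompt_template.template and llm.prompt_template.variables. The template text and variable names below are illustrative, and the attributes are shown as a plain dictionary; in a real app you would set each key on the active span (for example via span.set_attribute).

```python
import json

# Hypothetical prompt template and variables for one LLM call.
template = "You are a helpful assistant. Answer the question: {question}"
variables = {"question": "What is the capital of France?"}

# OpenInference expects the variables serialized as a JSON string,
# not a raw dictionary.
span_attributes = {
    "llm.prompt_template.template": template,
    "llm.prompt_template.variables": json.dumps(variables),
}

# In real instrumentation code, each pair would be attached to the
# LLM span, e.g.:
#   for key, value in span_attributes.items():
#       span.set_attribute(key, value)
```

Because the variables are stored as a JSON string, they round-trip cleanly when spans are later exported into a dataset.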
During dataset creation
In this dataset, we set attributes.llm.prompt_template.variables to a dictionary serialized as a JSON string. Conforming to the OpenInference semantic conventions here means prompt playground can read these attributes directly, and they will import correctly as input variables.
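A minimal sketch of what such dataset rows might look like, assuming the dataset is built from a list of question/answer examples (the example data and the input/output field names are illustrative, not part of the original):

```python
import json

# Hypothetical examples to upload as a dataset.
examples = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "What is 2 + 2?", "answer": "4"},
]

rows = []
for ex in examples:
    rows.append({
        "input": ex["question"],
        "output": ex["answer"],
        # Store the template variables as a JSON string under the
        # OpenInference attribute key, so prompt playground can
        # import them as input variables.
        "attributes.llm.prompt_template.variables": json.dumps(
            {"question": ex["question"]}
        ),
    })
```

Each row carries its own variable values, so every data point in the dataset can fill the same prompt template with different inputs.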
Here's how the dataset looks when imported into prompt playground, making it easy to iterate on your prompt and test new outputs across many data points.