Prompt Playground
Phoenix's Prompt Playground makes the process of iterating on and testing prompts quick and easy. The playground supports several LLM providers (OpenAI, Anthropic, Gemini, Azure) as well as custom model endpoints, making it an ideal prompt IDE for building, experimenting with, and evaluating prompts and models for your task.
Speed: Rapidly test variations in the prompt, model, invocation parameters, tools, and output format.
Reproducibility: All runs of the playground are saved as traces, unlocking annotations and evaluation.
Datasets: Use datasets as fixtures to run a prompt variant through its paces and to evaluate it systematically.
Prompt Management: Load, edit, and save prompts directly within the playground.
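If you are running Phoenix locally, the playground is part of the Phoenix app itself. The sketch below is a minimal example of bringing up the UI, assuming the `arize-phoenix` Python package is installed and that a provider API key such as `OPENAI_API_KEY` is available in the environment (keys can also be entered directly in the playground); the exact launch workflow may differ in hosted or self-hosted deployments.

```python
# Minimal sketch: launch a local Phoenix instance so the Prompt Playground UI
# can be opened in the browser.
# Assumes: `pip install arize-phoenix` and an OPENAI_API_KEY in the environment
# (assumption; provider keys can also be configured in the playground UI).
import os

import phoenix as px

# Placeholder key for illustration only.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")

# Start the Phoenix app locally; the returned session exposes the UI URL.
session = px.launch_app()
print(f"Open the Prompt Playground at: {session.url}")
```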
To learn more about how to use the playground, see Using the Playground.