Quickstart: Prompts
The Prompt Playground can be accessed from the left navbar of Phoenix. Start by entering your API key for whichever model provider you'd like to use.
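If you don't yet have Phoenix running, a minimal way to start it locally (a sketch; it assumes the arize-phoenix package is installed, and that any provider keys such as OPENAI_API_KEY are either set in your environment or entered in the UI):

```python
# Launch a local Phoenix instance; the UI (including Prompt Playground)
# is served at the printed URL (http://localhost:6006 by default).
import phoenix as px

px.launch_app()
```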
From here, you can directly prompt your model by modifying either the system or user prompt and pressing the Run button in the top right.
Let's start by comparing a few different prompt variations. Add two additional prompts using the +Prompt button. Give each of the three prompts a different system prompt, but use the same user prompt for all three.
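For example (these summarization variants are just illustrations; any three system prompts you'd like to compare will work):

```
System prompt #1:
You are a summarization tool. Summarize the provided paragraph.

System prompt #2:
You are a summarization tool. Summarize the provided paragraph in a single sentence.

System prompt #3:
You are a summarization tool. Summarize the provided paragraph in simple language a general audience can follow.

User prompt (same for all three):
Summarize the following article:

[paste the text of a short article here]
```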
Your playground should look something like this:
Let's run it and compare results:
The Prompt Playground can also run a series of dataset rows through your prompts. To start, we'll need a dataset. Phoenix offers many ways to upload a dataset; to keep things simple here, we'll upload a CSV directly. Download the article summaries file linked below:
Next, create a new dataset from the Datasets tab in Phoenix, and specify the input and output columns like so:
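If you'd rather create the dataset in code, the Phoenix client can upload a pandas DataFrame directly. A minimal sketch, assuming the CSV has input_article and summary columns (adjust the file path, dataset name, and column names to match your file):

```python
import pandas as pd
import phoenix as px

df = pd.read_csv("article_summaries.csv")

# Upload as a Phoenix dataset, marking which columns are inputs vs. outputs.
px.Client().upload_dataset(
    dataset_name="article-summaries",
    dataframe=df,
    input_keys=["input_article"],
    output_keys=["summary"],
)
```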
Now we can return to Prompt Playground, and this time choose our new dataset from the "Test over dataset" dropdown.
We'll also need to update our prompt to look for the {{input_article}} column in our dataset.
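For example, the shared user prompt might become:

```
Summarize the following article:

{{input_article}}
```

Playground prompts use mustache-style {{variable}} templating, so each row's input_article value is substituted in at run time.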
Now when we run our prompts, each row of the dataset is sent through every prompt variation.
And if you return to view your dataset, you'll see the details of that run saved as an experiment.
From here, you could evaluate that experiment to test its performance, or add complexity to your prompts by including different tools, output schemas, and models to test against.
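If you want to script that kind of evaluation instead, here is a minimal sketch using the phoenix.experiments helpers. The dataset name, column name, model, prompt text, and the toy is_concise evaluator are all assumptions to adapt to your setup, and the exact client API may vary across Phoenix versions:

```python
import phoenix as px
from phoenix.experiments import run_experiment
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fetch the dataset created above (the name is an assumption).
dataset = px.Client().get_dataset(name="article-summaries")

def summarize(input):
    # Task: apply one prompt variant to a single dataset row.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a summarization tool. Summarize the provided paragraph."},
            {"role": "user", "content": input["input_article"]},
        ],
    )
    return response.choices[0].message.content

def is_concise(output):
    # Toy evaluator: pass if the summary stays under 50 words.
    return len(output.split()) < 50

experiment = run_experiment(dataset, summarize, evaluators=[is_concise])
```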
This is just a basic example of the Prompt Playground. The real power of the tool lies in replaying spans or, as we did here, running over a dataset. If you're interested in replaying spans, see the span replay documentation.