Using a prompt


Once you have tagged a version of a prompt as ready (e.g. "staging"), you can pull the prompt into your codebase and use it to prompt an LLM.

ℹ️ A caution about using prompts inside your application code

When integrating Phoenix prompts into your application, it's important to understand that prompts are treated as code and are stored externally from your primary codebase. This architectural decision introduces several considerations:

Key Implementation Impacts

  • Network dependencies for prompt retrieval

  • Additional debugging complexity

  • External system dependencies

Current Status

The Phoenix team is actively implementing safeguards to minimize these risks through:

  • Caching mechanisms

  • Fallback systems

Best Practices

If you choose to implement Phoenix prompts in your application, ensure you:

  1. Implement robust caching strategies (a sketch follows this list)

  2. Develop comprehensive fallback mechanisms

  3. Consider the impact on your application's reliability requirements
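
As a rough illustration of the first two practices, here is a minimal sketch; the caching dictionary and fallback argument are hypothetical helpers, not part of the Phoenix client:

from phoenix.client import Client

client = Client()

# Hypothetical in-process cache of the last successfully pulled prompts
_prompt_cache = {}

def get_prompt_with_fallback(name, fallback=None):
    """Pull a prompt, caching it; fall back if Phoenix is unreachable."""
    try:
        prompt = client.prompts.get(prompt_identifier=name)
        _prompt_cache[name] = prompt  # remember the last known good version
        return prompt
    except Exception:
        # Serve the cached version, or a locally defined fallback prompt
        return _prompt_cache.get(name, fallback)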

If you have any feedback on the above improvements, please let us know at https://github.com/Arize-ai/phoenix/issues/6290.

To use prompts in your code, you will need to install the Phoenix client library.

For Python:

pip install arize-phoenix-client

For JavaScript / TypeScript:

npm install @arizeai/phoenix-client

Pulling a prompt

There are three main ways to pull prompts: by name or ID (pulls the latest version), by version ID, and by tag.

Pulling a prompt by Name or ID

Pulling a prompt by name or ID (i.e. its identifier) is the easiest way to pull a prompt. Note that since a name or ID doesn't specify a particular version, you will always get the latest version of the prompt. For this reason, we only recommend doing this during development.

from phoenix.client import Client

# Initialize a phoenix client with your phoenix endpoint
# By default it will read from your environment variables
client = Client(
 # endpoint="https://my-phoenix.com",
)

# Pull the latest version of a prompt by name
prompt_name = "my-prompt-name"
prompt = client.prompts.get(prompt_identifier=prompt_name)

Note that prompt names and IDs are synonymous: both are accepted as the prompt identifier.
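
For example, the same call accepts a prompt ID in place of the name (the ID below is hypothetical):

# The identifier can also be the prompt's ID, found in the UI
prompt = client.prompts.get(prompt_identifier="a1234")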

import { getPrompt } from "@arizeai/phoenix-client/prompts";

const prompt = await getPrompt({ name: "my-prompt" });
// ^ the latest version of the prompt named "my-prompt"

const promptById = await getPrompt({ promptId: "a1234" });
// ^ the latest version of the prompt with Id "a1234"

Pulling a prompt by Version ID

Pulling a prompt by version ID retrieves the content of a prompt at a particular point in time. A version can never change or be deleted, so you can reasonably rely on it in production-like use cases. The ID of a specific prompt version can be found in the prompt's history in the UI.

from phoenix.client import Client

# Initialize a phoenix client with your phoenix endpoint
# By default it will read from your environment variables
client = Client(
 # endpoint="https://my-phoenix.com",
)

# The version ID can be found in the versions tab in the UI
prompt = client.prompts.get(prompt_version_id="UHJvbXB0VmVyc2lvbjoy")
print(prompt.id)
prompt.dumps()

import { getPrompt } from "@arizeai/phoenix-client/prompts";

const promptByVersionId = await getPrompt({ versionId: "b5678" });
// ^ the specific prompt version with version ID "b5678"

Pulling a prompt by Tag

Pulling a prompt by tag is most useful when you want a particular version of a prompt to be automatically used in a specific environment (say "staging"). To pull prompts by tag, you must tag a prompt version in the UI first.

from phoenix.client import Client

# Initialize a phoenix client with your phoenix endpoint
# By default it will read from your environment variables
client = Client(
 # endpoint="https://my-phoenix.com",
)

# Since tags don't uniquely identify a prompt version 
#  it must be paired with the prompt identifier (e.g. name)
prompt = client.prompts.get(prompt_identifier="my-prompt-name", tag="staging")
print(prompt.id)
prompt.dumps()

Note that a tag is only unique within a single prompt, so it must be paired with the prompt_identifier. You can manage a prompt's version tags in the UI.

import { getPrompt } from "@arizeai/phoenix-client/prompts";

const promptByTag = await getPrompt({ tag: "staging", name: "my-prompt" });
// ^ the specific prompt version tagged "staging", for prompt "my-prompt"

A prompt pulled in this way can be automatically updated in your application by simply moving the "staging" tag from one prompt version to another.
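
For example, here is a minimal sketch of periodically re-pulling a tagged prompt so that a tag move is picked up without a redeploy; the prompt name and refresh interval are assumptions:

import time

from phoenix.client import Client

client = Client()

# Re-pull the tagged prompt on an interval; when the "staging" tag is
# moved to a new version, the next refresh picks it up automatically.
def poll_staging_prompt(interval_seconds=300):
    while True:
        prompt = client.prompts.get(
            prompt_identifier="my-prompt-name", tag="staging"
        )
        # ... hand the refreshed prompt to the rest of your application ...
        time.sleep(interval_seconds)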

Using a prompt

The Phoenix client libraries make it simple to transform prompts to the SDK that you are using (no proxying necessary!). The clients format the prompt with your variables and provide the messages, model information, tools, and response format (when applicable).

from openai import OpenAI

# `prompt` is the prompt pulled above, e.g. via client.prompts.get(...)
prompt_vars = {"topic": "Sports", "article": "Surrey have signed Australia all-rounder Moises Henriques for this summer's NatWest T20 Blast. Henriques will join Surrey immediately after the Indian Premier League season concludes at the end of next month and will be with them throughout their Blast campaign and also as overseas cover for Kumar Sangakkara - depending on the veteran Sri Lanka batsman's Test commitments in the second half of the summer. Australian all-rounder Moises Henriques has signed a deal to play in the T20 Blast for Surrey . Henriques, pictured in the Big Bash (left) and in ODI action for Australia (right), will join after the IPL . Twenty-eight-year-old Henriques, capped by his country in all formats but not selected for the forthcoming Ashes, said: 'I'm really looking forward to playing for Surrey this season. It's a club with a proud history and an exciting squad, and I hope to play my part in achieving success this summer. 'I've seen some of the names that are coming to England to be involved in the NatWest T20 Blast this summer, so am looking forward to testing myself against some of the best players in the world.' Surrey director of cricket Alec Stewart added: 'Moises is a fine all-round cricketer and will add great depth to our squad.'"}
formatted_prompt = prompt.format(variables=prompt_vars)

# Make a request with your Prompt
oai_client = OpenAI()
resp = oai_client.chat.completions.create(**formatted_prompt)

import { getPrompt, toSDK } from "@arizeai/phoenix-client/prompts";
import OpenAI from "openai";

const openai = new OpenAI()
const prompt = await getPrompt({ name: "my-prompt" });

// openaiParameters is fully typed, and safe to use directly in the openai client
const openaiParameters = toSDK({
  // sdk does not have to match the provider saved in your prompt
  // if it differs, we will apply a best effort conversion between providers automatically
  sdk: "openai",
  prompt,
  // variables within your prompt template can be replaced across messages
  variables: { question: "How do I write 'Hello World' in JavaScript?" }
});

const response = await openai.chat.completions.create({
  ...openaiParameters,
  // you can still override any of the invocation parameters as needed
  // for example, you can change the model or stream the response
  model: "gpt-4o-mini",
  stream: false
})

Both the Python and TypeScript SDKs support transforming your prompts to a variety of SDKs (no proprietary SDK necessary).

  • Python - support for OpenAI, Anthropic, Gemini

  • TypeScript - support for OpenAI, Anthropic, and the Vercel AI SDK

