Prompt Hub API

The Arize Prompt Hub SDK provides a Python interface for managing prompts on the Arize AI platform: create, retrieve, and update prompt templates, then use them with various LLM providers.

This API is currently in Early Access. While we're excited to share it with you, please be aware that it may undergo significant changes, including breaking changes, as we continue development. We'll do our best to minimize disruptions, but cannot guarantee long-term backward compatibility during this phase. We value your feedback as we refine and improve the API experience.

Overview

Prompt Hub enables you to:

  • Create and store prompt templates in your Arize space

  • Retrieve prompts for use in your applications

  • Update existing prompts with new versions

  • Track prompt versions and changes

Quick Start

Install the SDK with the Prompt Hub extra:

pip install "arize[PromptHub]"

OpenAI Example

from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider
from openai import OpenAI

# Initialize the client with your Arize credentials
prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)

# Create a prompt template
new_prompt = Prompt(
    name="customer_service_greeting",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant."
        },
        {
            "role": "user",
            "content": "Customer query: {query}"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)

# Save the prompt to Arize Prompt Hub
prompt_client.push_prompt(new_prompt)

# Use the prompt with an LLM
oai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
prompt_vars = {"query": "When will my order arrive?"}
formatted_prompt = new_prompt.format(prompt_vars)
response = oai_client.chat.completions.create(**formatted_prompt)
print(response.choices[0].message.content)

Vertex AI Example

from arize.experimental.prompt_hub import ArizePromptClient, Prompt
import vertexai
from vertexai.generative_models import GenerativeModel
from google.oauth2 import service_account

# Load credentials from your service account key file
credentials = service_account.Credentials.from_service_account_file('path_to_your_creds.json')

# Initialize Vertex AI
project_id = "my-ai-project"  # This is in the JSON file
vertexai.init(project=project_id, location="us-central1", credentials=credentials)

prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)
prompt = prompt_client.pull_prompt("customer_service_greeting")
prompt_vars = {"query": "Where is my order?"}
vertex_prompt = prompt.format(prompt_vars)
model = GenerativeModel(vertex_prompt.model_name)
response = model.generate_content(vertex_prompt.messages)
print(response.text)

Error Handling and Fallback Strategies

When working with the Prompt Hub API in production environments, it's important to implement fallback mechanisms in case the API becomes temporarily unavailable.

Local Cache Fallback

You can implement a local cache of your prompts to ensure your application continues to function even if the Prompt Hub API is unreachable:

import json
import os
from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider

class PromptManager:
    def __init__(self, space_id, api_key, cache_dir=".prompt_cache"):
        self.client = ArizePromptClient(space_id=space_id, api_key=api_key)
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)
    
    def get_prompt(self, prompt_name):
        cache_path = os.path.join(self.cache_dir, f"{prompt_name}.json")
        
        try:
            # First try to pull the prompt from the API
            prompt = self.client.pull_prompt(prompt_name)
            
            # If successful, update the cache
            self._save_to_cache(prompt, cache_path)
            return prompt
            
        except Exception as e:
            print(f"Error accessing Prompt Hub API: {e}")
            print("Attempting to use cached prompt...")
            
            # Fall back to cached version if available
            if os.path.exists(cache_path):
                return self._load_from_cache(cache_path)
            else:
                raise ValueError(f"No cached version of prompt '{prompt_name}' available")
    
    def _save_to_cache(self, prompt, cache_path):
        # Serialize the prompt to JSON and save to cache.
        # default=str stringifies non-JSON-serializable fields such as
        # the provider enum; they are converted back on load.
        with open(cache_path, 'w') as f:
            json.dump(prompt.__dict__, f, default=str)
    
    def _load_from_cache(self, cache_path):
        # Load and deserialize the prompt from cache
        with open(cache_path, 'r') as f:
            prompt_data = json.load(f)
        # Restore the provider enum from its string form,
        # e.g. "LLMProvider.OPENAI" -> LLMProvider.OPENAI
        # (input_variable_format, if present, may need the same treatment)
        provider = prompt_data.get("provider")
        if isinstance(provider, str):
            prompt_data["provider"] = LLMProvider[provider.split(".")[-1]]
        return Prompt(**prompt_data)
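
A minimal usage sketch for the PromptManager above (the prompt and variable names are illustrative):

manager = PromptManager(space_id="YOUR_SPACE_ID", api_key="YOUR_API_KEY")

# Served from the API when reachable, from the local cache otherwise
prompt = manager.get_prompt("customer_service_greeting")
formatted_prompt = prompt.format({"query": "Where is my order?"})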

Best Practices for Resilient Applications

  1. Always cache prompts after retrieval: Update your local cache whenever you successfully retrieve a prompt.

def get_and_cache_prompt(prompt_manager, prompt_name):
    prompt = prompt_manager.get_prompt(prompt_name)
    
    # Additional logic to ensure the prompt is cached
    cache_path = os.path.join(prompt_manager.cache_dir, f"{prompt_name}.json")
    prompt_manager._save_to_cache(prompt, cache_path)
    
    return prompt
  2. Implement exponential backoff: When the API is unavailable, retry with exponentially increasing delays:

import time
import random

def get_prompt_with_retry(client, prompt_name, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.pull_prompt(prompt_name)
        except Exception as e:
            if attempt == max_retries - 1:
                # On last attempt, re-raise the exception
                raise
            
            # Calculate backoff time with jitter
            backoff_time = (2 ** attempt) + random.uniform(0, 1)
            print(f"Error accessing API: {e}. Retrying in {backoff_time:.2f} seconds...")
            time.sleep(backoff_time)
  3. Periodically sync your cache: Run a background job that refreshes your cache with the latest prompts from the API, as shown below.

import threading

class PromptSyncManager:
    def __init__(self, prompt_manager, sync_interval=3600):  # Default: sync every hour
        self.prompt_manager = prompt_manager
        self.sync_interval = sync_interval
        self.prompt_names = []
        self.stop_event = threading.Event()
        
    def start_sync(self, prompt_names):
        self.prompt_names = prompt_names
        threading.Thread(target=self._sync_job, daemon=True).start()
        
    def _sync_job(self):
        while not self.stop_event.is_set():
            for prompt_name in self.prompt_names:
                try:
                    self.prompt_manager.get_prompt(prompt_name)  # This will update the cache
                except Exception as e:
                    print(f"Failed to sync prompt '{prompt_name}': {e}")
            # Wait on the stop event instead of sleeping so stop() takes effect promptly
            self.stop_event.wait(self.sync_interval)

    def stop(self):
        self.stop_event.set()
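
For example, to keep a couple of prompts refreshed in the background (reusing the PromptManager sketch above; the prompt names are illustrative):

# Sync the listed prompts every 30 minutes in a daemon thread
manager = PromptManager(space_id="YOUR_SPACE_ID", api_key="YOUR_API_KEY")
sync_manager = PromptSyncManager(manager, sync_interval=1800)
sync_manager.start_sync(["customer_service_greeting", "product_recommendation"])

# ... application work ...

sync_manager.stop()  # Signal the background thread to exit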

Core Components

ArizePromptClient

The main client for interacting with the Arize Prompt Hub.

client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY',
    base_url='https://app.arize.com'  # Optional, defaults to this value
)

Prompt

Represents a prompt template with associated metadata.

prompt = Prompt(
    name='prompt_name',                  # Required: Name of the prompt
    messages=[...],                      # Required: List of message dictionaries
    provider=LLMProvider.OPENAI,         # Required: LLM provider
    model_name="gpt-4o",                 # Required: Model name
    description="Description",           # Optional: Description of the prompt
    tags=["tag1", "tag2"],              # Optional: Tags for categorization
    input_variable_format=PromptInputVariableFormat.F_STRING  # Optional: Format for variables
)

LLMProvider

Enum for supported LLM providers:

  • LLMProvider.OPENAI: OpenAI models

  • LLMProvider.AZURE_OPENAI: Azure OpenAI models

  • LLMProvider.AWS_BEDROCK: AWS Bedrock models

  • LLMProvider.VERTEX_AI: Google Vertex AI models

  • LLMProvider.CUSTOM: Custom provider

PromptInputVariableFormat

Enum for specifying how input variables are formatted in prompts:

  • PromptInputVariableFormat.F_STRING: Single curly braces ({variable_name})

  • PromptInputVariableFormat.MUSTACHE: Double curly braces ({{variable_name}})
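
As a quick contrast (a minimal sketch; the prompt names here are illustrative), the same template uses single braces under F_STRING and double braces under MUSTACHE, and both are formatted the same way:

f_string_prompt = Prompt(
    name="greeting_f_string",
    messages=[{"role": "user", "content": "Hello {name}!"}],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    input_variable_format=PromptInputVariableFormat.F_STRING
)

mustache_prompt = Prompt(
    name="greeting_mustache",
    messages=[{"role": "user", "content": "Hello {{name}}!"}],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    input_variable_format=PromptInputVariableFormat.MUSTACHE
)

# Both render "Hello Alice!" in the user message
f_string_prompt.format({"name": "Alice"})
mustache_prompt.format({"name": "Alice"})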

API Reference

ArizePromptClient Methods

pull_prompts()

Retrieves all prompts in the space.

prompts = client.pull_prompts()
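
For example, to list everything in the space (assuming each returned Prompt exposes the name attribute shown in the constructor above):

prompts = client.pull_prompts()
for prompt in prompts:
    print(prompt.name)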

pull_prompt(prompt_name)

Retrieves a specific prompt by name.

prompt = client.pull_prompt("my_prompt_name")

push_prompt(prompt, commit_message=None)

Creates a new prompt or updates an existing one.

# Create new prompt
client.push_prompt(new_prompt)

# Update existing prompt with commit message
client.push_prompt(existing_prompt, commit_message="Updated system message")

Prompt Methods

format(variables)

Formats the prompt with the given variables for use with an LLM provider.

variables = {"query": "Where is my order?", "customer_name": "John"}
formatted_prompt = prompt.format(variables)

Examples

Creating and Using a Prompt

from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider
from openai import OpenAI

# Initialize clients
prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)
oai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# Create a prompt
new_prompt = Prompt(
    name='product_recommendation',
    description="Recommends products based on user preferences",
    messages=[
        {
            "role": "system",
            "content": "You are a product recommendation assistant."
        },
        {
            "role": "user",
            "content": "Customer preferences: {preferences}\nBudget: {budget}"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    tags=["recommendation", "e-commerce"]
)

# Save to Prompt Hub
prompt_client.push_prompt(new_prompt)

# Use the prompt
prompt_vars = {
    "preferences": "I like outdoor activities and photography",
    "budget": "$500"
}
formatted_prompt = new_prompt.format(prompt_vars)
response = oai_client.chat.completions.create(**formatted_prompt)
print(response.choices[0].message.content)

Updating an Existing Prompt

# Retrieve an existing prompt
prompt = prompt_client.pull_prompt("product_recommendation")

# Modify the prompt
prompt.messages.append({
    "role": "user",
    "content": "Also consider these additional preferences: {additional_preferences}"
})

# Update the prompt in Prompt Hub
prompt_client.push_prompt(prompt, commit_message="Added support for additional preferences")

Using Different Variable Formats

from arize.experimental.prompt_hub import Prompt, LLMProvider, PromptInputVariableFormat

# Using Mustache format (double curly braces)
mustache_prompt = Prompt(
    name="mustache_example",
    messages=[
        {
            "role": "user",
            "content": "Hello {{name}}, how can I help you today?"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    input_variable_format=PromptInputVariableFormat.MUSTACHE
)

# Format and use the prompt
formatted = mustache_prompt.format({"name": "Alice"})

Troubleshooting Common Issues

  1. Authentication Errors: Ensure your space_id and api_key are correct

  2. Prompt Not Found: Check that the prompt name matches exactly

  3. Formatting Errors: Verify that the variables you pass to format() match the placeholders in the prompt template (a quick check is sketched below)
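
A minimal sketch of such a check for F_STRING-style prompts (check_variables is a hypothetical helper, not part of the SDK; adapt the regex for MUSTACHE templates):

import re

def check_variables(prompt, variables):
    # Collect {placeholder} names from every message in the template
    placeholders = set()
    for message in prompt.messages:
        placeholders.update(re.findall(r"\{(\w+)\}", message["content"]))
    
    missing = placeholders - set(variables)
    if missing:
        raise ValueError(f"Missing values for placeholders: {missing}")

# e.g. with a prompt pulled earlier
check_variables(prompt, {"query": "Where is my order?"})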
