Amazon Bedrock Tracing

Instrument LLM calls to AWS Bedrock via the boto3 client using the BedrockInstrumentor

boto3 provides Python bindings for AWS services, including Bedrock, which offers access to a number of foundation models. Calls to these models can be instrumented using OpenInference, enabling OpenTelemetry-compliant observability of applications built on them. Traces collected using OpenInference can be viewed in Phoenix.

OpenInference traces collect telemetry data about the execution of your LLM application. Consider using this instrumentation to understand how Bedrock-managed models are called inside a complex system and to troubleshoot issues such as extraction and response synthesis.

Launch Phoenix

Phoenix Cloud

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint and API Key:

import os

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

Your Phoenix API key can be found on the Keys section of your dashboard.

Command Line

Launch your local Phoenix instance:

pip install arize-phoenix
phoenix serve

For details on customizing a local terminal deployment, see Terminal Setup.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

See Terminal for more details.

Docker

Pull the latest Phoenix image from Docker Hub:

docker pull arizephoenix/phoenix:latest

Run your containerized instance:

docker run -p 6006:6006 arizephoenix/phoenix:latest

This will expose Phoenix on localhost:6006.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

For more info on using Phoenix with Docker, see Docker.

Notebook

Install packages:

pip install arize-phoenix

Launch Phoenix:

import phoenix as px
px.launch_app()

By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See self-hosting or use one of the other deployment options to retain traces.

Install

pip install openinference-instrumentation-bedrock opentelemetry-exporter-otlp

Setup

Connect to your Phoenix instance using the register function.

from phoenix.otel import register

# configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
  auto_instrument=True # Auto-instrument your app based on installed OI dependencies
)

Because register was called with auto_instrument=True, boto3 is instrumented as soon as you connect to your Phoenix server. Initialize your bedrock-runtime client after this point; all clients created after instrumentation will send traces on every call to invoke_model.

import boto3

# Clients created after instrumentation are traced automatically
session = boto3.session.Session()
client = session.client("bedrock-runtime")
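
If you prefer explicit control instead of auto_instrument=True, you can apply the instrumentor yourself before creating the client. A minimal sketch using the BedrockInstrumentor from the openinference-instrumentation-bedrock package installed above:

from openinference.instrumentation.bedrock import BedrockInstrumentor

# Explicitly instrument boto3 clients; redundant if auto_instrument=True already applied it
BedrockInstrumentor().instrument(tracer_provider=tracer_provider)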

Run Bedrock

From here, you can run Bedrock as normal:

import json

prompt = (
    b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}'
)
response = client.invoke_model(modelId="anthropic.claude-v2", body=prompt)
response_body = json.loads(response.get("body").read())
print(response_body["completion"])
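
Newer Bedrock models are typically called through the Converse API, which recent versions of the instrumentor also trace. A short sketch, assuming your AWS account has been granted access to the model shown:

# Converse API calls on an instrumented client are traced as well
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello there, how are you?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])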

Observe

Now that you have tracing set up, all calls to invoke_model will be streamed to your running Phoenix instance for observability and evaluation.
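
To confirm spans are arriving, you can pull them back out with the Phoenix Python client. A minimal sketch, assuming a reachable Phoenix instance and the project name used above:

import phoenix as px

# Fetch recorded spans as a pandas DataFrame and inspect the most recent ones
df = px.Client().get_spans_dataframe(project_name="my-llm-app")
print(df.head())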

Resources

Example Tracing & Eval Notebook
OpenInference package
Working examples
