Model Context Protocol (MCP)

Phoenix provides tracing for MCP clients and servers through OpenInference. This includes the unique capability to trace client-to-server interactions under a single trace, in the correct hierarchy.


The openinference-instrumentation-mcp instrumentor is unique among OpenInference instrumentors: it does not generate any telemetry of its own. Instead, it enables context propagation between MCP clients and servers to unify traces. You still need to generate OpenTelemetry traces in both the client and the server to see a unified trace.

Connect to Phoenix

Before instrumenting anything, connect your application to a Phoenix instance using one of the options below.

Phoenix Cloud

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login, then install the packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint and API key:

import os

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

Your Phoenix API key can be found on the Keys section of your dashboard.

Command Line

Launch your local Phoenix instance:

pip install arize-phoenix
phoenix serve

For details on customizing a local terminal deployment, see the Terminal Setup guide. Then install the client packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

Docker

Pull the latest Phoenix image from Docker Hub:

docker pull arizephoenix/phoenix:latest

Run your containerized instance:

docker run -p 6006:6006 arizephoenix/phoenix:latest

This will expose Phoenix on localhost:6006. Install the client packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

For more info on using Phoenix with Docker, see the Docker self-hosting guide.

Notebook

Install packages:

pip install arize-phoenix

Launch Phoenix:

import phoenix as px
px.launch_app()

By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See the self-hosting guide or use one of the other deployment options to retain traces.

Install

pip install openinference-instrumentation-mcp

Because the MCP instrumentor does not generate its own telemetry, you must use it alongside other instrumentation code to see traces.

The example code below uses the OpenAI Agents SDK, which you can instrument with:

pip install openinference-instrumentation-openai-agents
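
In the examples below, register(auto_instrument=True) activates every installed OpenInference instrumentor for you. If you prefer to wire things up explicitly, a minimal sketch (assuming the standard OpenInference instrumentor entry points) looks like this:

from openinference.instrumentation.mcp import MCPInstrumentor
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from phoenix.otel import register

# Connect to Phoenix without auto-instrumentation
tracer_provider = register()

# Propagates OTel context across the MCP client/server boundary
# (emits no spans of its own)
MCPInstrumentor().instrument(tracer_provider=tracer_provider)

# Generates the actual agent and LLM spans
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer_provider)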

Add Tracing to your MCP Client

import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServer, MCPServerStdio
from dotenv import load_dotenv

from phoenix.otel import register

load_dotenv()

# Connect to your Phoenix instance
tracer_provider = register(auto_instrument=True)


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to answer the users question.",
        mcp_servers=[mcp_server],
    )
    while True:
        message = input("\n\nEnter your question (or 'exit' to quit): ")
        if message.lower() in ("exit", "q"):
            break
        print(f"\n\nRunning: {message}")
        result = await Runner.run(starting_agent=agent, input=message)
        print(result.final_output)


async def main():
    async with MCPServerStdio(
        name="Financial Analysis Server",
        params={
            "command": "fastmcp",
            "args": ["run", "./server.py"],
        },
        client_session_timeout_seconds=30,
    ) as server:
        await run(server)
        
if __name__ == "__main__":
    asyncio.run(main())
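
Note that the client spawns the server as a subprocess over stdio (via the fastmcp command). The MCP instrumentor injects the OpenTelemetry context into the client-to-server messages, which is how the server's spans end up nested under the client's trace.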

Add Tracing to your MCP Server

import json
import os
from datetime import datetime, timedelta

import openai
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel

from phoenix.otel import register

load_dotenv()

# You must also connect your MCP server to Phoenix
tracer_provider = register(auto_instrument=True)

# Get a tracer to add additional instrumentation
tracer = tracer_provider.get_tracer("financial-analysis-server")

# Configure OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
MODEL = "gpt-4-turbo"

# Create MCP server
mcp = FastMCP("Financial Analysis Server")


class StockAnalysisRequest(BaseModel):
    ticker: str
    time_period: str = "short-term"  # short-term, medium-term, long-term


@mcp.tool()
@tracer.tool(name="MCP.analyze_stock")  # this OpenInference decorator adds tracing to this tool
def analyze_stock(request: StockAnalysisRequest) -> dict:
    """Analyzes a stock based on its ticker symbol and provides investment recommendations."""

    # Make LLM API call to analyze the stock
    prompt = f"""
    Provide a detailed financial analysis for the stock ticker: {request.ticker}
    Time horizon: {request.time_period}

    Please include:
    1. Company overview
    2. Recent financial performance
    3. Key metrics (P/E ratio, market cap, etc.)
    4. Risk assessment
    5. Investment recommendation

    Format your response as a JSON object with the following structure:
    {{
        "ticker": "{request.ticker}",
        "company_name": "Full company name",
        "overview": "Brief company description",
        "financial_performance": "Analysis of recent performance",
        "key_metrics": {{
            "market_cap": "Value in billions",
            "pe_ratio": "Current P/E ratio",
            "dividend_yield": "Current yield percentage",
            "52_week_high": "Value",
            "52_week_low": "Value"
        }},
        "risk_assessment": "Analysis of risks",
        "recommendation": "Buy/Hold/Sell recommendation with explanation",
        "time_horizon": "{request.time_period}"
    }}
    """

    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )

    analysis = json.loads(response.choices[0].message.content)
    return analysis

# ... define any additional MCP tools you wish

if __name__ == "__main__":
    mcp.run()
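
One detail worth noting on analyze_stock: Python applies decorators bottom-up, so @tracer.tool wraps the function first, and @mcp.tool() then registers the traced version with the server. Keeping @mcp.tool() on top ensures the tool calls that the MCP server executes are the instrumented ones.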

Observe

Now that you have tracing set up, every invocation of your client and server will be streamed to Phoenix for observability and evaluation, with client and server spans connected under a single trace.
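
Once traces are flowing, you can also pull them back out of Phoenix for analysis. A minimal sketch using the Phoenix Python client follows; the "default" project name is an assumption, so substitute whichever project you registered:

import phoenix as px

# Query the unified client + server spans back out of Phoenix
client = px.Client()
spans_df = client.get_spans_dataframe(project_name="default")
print(spans_df.head())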

Resources

  • End-to-end example
  • OpenInference package
