Mastra Tracing


Launch Phoenix

Phoenix can run as a hosted service (Phoenix Cloud), locally from the command line, or in Docker. Pick one of the options below.

Phoenix Cloud

Sign up for Phoenix:

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

Set your Phoenix endpoint and API Key:

export PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="api_key=YOUR_PHOENIX_API_KEY"

Your Phoenix API key can be found in the Keys section of your dashboard.

Command Line

Launch your local Phoenix instance:

pip install arize-phoenix
phoenix serve

This will expose Phoenix on localhost:6006.

For details on customizing a local terminal deployment, see the Terminal Setup guide.

Set your Phoenix endpoint and API Key:

export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006/v1/traces"
export PHOENIX_API_KEY="YOUR PHOENIX API KEY" # only necessary if you've enabled auth

Docker

Pull the latest Phoenix image from Docker Hub:

docker pull arizephoenix/phoenix:latest

Run your containerized instance:

docker run -p 6006:6006 arizephoenix/phoenix:latest

This will expose Phoenix on localhost:6006.

Set your Phoenix endpoint and API Key:

export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006/v1/traces"
export PHOENIX_API_KEY="YOUR PHOENIX API KEY" # only necessary if you've enabled auth

For more info on using Phoenix with Docker, see the Docker deployment guide.

Setup

Install packages:

npm install @arizeai/openinference-mastra

Initialize OpenTelemetry tracing for your Mastra application:

import { Mastra } from "@mastra/core";
import {
  OpenInferenceOTLPTraceExporter,
  isOpenInferenceSpan,
} from "@arizeai/openinference-mastra";

export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "openinference-mastra-agent", // you can rename this to whatever you want to appear in the Phoenix UI
    enabled: true,
    export: {
      type: "custom",
      exporter: new OpenInferenceOTLPTraceExporter({
        url: process.env.PHOENIX_COLLECTOR_ENDPOINT,
        headers: {
          Authorization: `Bearer ${process.env.PHOENIX_API_KEY}`, // if you're self-hosting Phoenix without auth, you can remove this header
        },
        // optional: filter out HTTP and other Node-internal spans;
        // they will still be exported to Mastra, but not to the target
        // of this exporter
        spanFilter: isOpenInferenceSpan,
      }),
    },
  },
});

From here you can use Mastra as normal. All agents, workflows, and tool calls will be automatically traced.
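
For example, a minimal agent like the following sketch (the agent name, instructions, model, and the @ai-sdk/openai provider package are assumptions, not part of the config above) will produce an agent span with nested LLM spans once it is registered on the Mastra instance and invoked:

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// hypothetical agent for illustration; register it on the Mastra instance
// above via `agents: { assistant }`
export const assistant = new Agent({
  name: "assistant",
  instructions: "You are a concise, helpful assistant.",
  model: openai("gpt-4o-mini"),
});

// invoking the agent (from your own code or the dev playground) is traced
// automatically, e.g.:
//   const reply = await mastra.getAgent("assistant").generate("Say hello.");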

Example Agent Walkthrough

Here is a full project example to get you started:

Launch Phoenix using one of the methods above

The rest of this tutorial will assume you are running Phoenix locally on the default localhost:6006 port.

Create a new Mastra project

npm create mastra@latest
# answer the prompts; include the agent, tools, and example code when asked

cd chosen-project-name
npm install --save @arizeai/openinference-mastra

Connect to Phoenix

Add the OpenInference telemetry code to your src/index.ts file. The complete file should now look like this:

// chosen-project-name/src/index.ts
import { Mastra } from "@mastra/core/mastra";
import { createLogger } from "@mastra/core/logger";
import { LibSQLStore } from "@mastra/libsql";
import {
  isOpenInferenceSpan,
  OpenInferenceOTLPTraceExporter,
} from "@arizeai/openinference-mastra";

import { weatherAgent } from "./agents/weather-agent";

export const mastra = new Mastra({
  agents: { weatherAgent },
  storage: new LibSQLStore({
    url: ":memory:",
  }),
  logger: createLogger({
    name: "Mastra",
    level: "info",
  }),
  telemetry: {
    enabled: true,
    serviceName: "weather-agent",
    export: {
      type: "custom",
      exporter: new OpenInferenceOTLPTraceExporter({
        url: process.env.PHOENIX_COLLECTOR_ENDPOINT,
        headers: {
          Authorization: `Bearer ${process.env.PHOENIX_API_KEY}`,
        },
        spanFilter: isOpenInferenceSpan,
      }),
    },
  },
});
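
The exporter above reads PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY from the environment. If you are running Phoenix locally as assumed earlier, one way to provide them is through the project's .env file (a sketch with placeholder values; create-mastra projects typically already include a .env for the model provider key):

# chosen-project-name/.env (placeholder values)
OPENAI_API_KEY=your-openai-api-key
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006/v1/traces
# PHOENIX_API_KEY is only needed if your Phoenix instance has auth enabled
PHOENIX_API_KEY=your-phoenix-api-key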

Run the Agent

npm run dev
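
The dev server also starts a local playground where you can chat with the agent. If you prefer to trigger the agent from a script instead, a minimal sketch (the file name and prompt are assumptions; it imports the Mastra instance from the src/index.ts shown above) looks like:

// chosen-project-name/scripts/ask-weather.ts (hypothetical helper script)
// run with: npx tsx scripts/ask-weather.ts
import { mastra } from "../src/index";

const agent = mastra.getAgent("weatherAgent");
const result = await agent.generate("What's the weather in San Francisco?");
console.log(result.text);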

View your Traces in Phoenix

Interact with the agent, then open Phoenix, by default at http://localhost:6006, to see a trace for each agent run.

What Gets Traced

The Mastra instrumentation automatically captures:

  • Agent Executions: Complete agent runs including instructions, model calls, and responses

  • Workflow Steps: Individual step executions within workflows, including inputs, outputs, and timing

  • Tool Calls: Function calls made by agents, including parameters and results (see the sketch after this list)

  • LLM Interactions: All model calls with prompts, responses, token usage, and metadata

  • RAG Operations: Vector searches, document retrievals, and embedding generations

  • Memory Operations: Agent memory reads and writes

  • Error Handling: Exceptions and error states in your AI pipeline
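
As an illustration of the tool spans mentioned above, a tool defined with Mastra's createTool helper (the id, schema, and return value below are made up for this sketch) is captured together with its input parameters and result:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// hypothetical tool; attach it to an agent via `tools: { getTemperature }`
export const getTemperature = createTool({
  id: "get-temperature",
  description: "Look up the current temperature for a city",
  inputSchema: z.object({
    city: z.string().describe("City name"),
  }),
  outputSchema: z.object({
    celsius: z.number(),
  }),
  execute: async ({ context }) => {
    // `context` is the validated input; the call and its return value
    // appear as a tool span nested under the agent span in Phoenix
    return { celsius: 21 };
  },
});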

Trace Attributes

Phoenix will capture detailed attributes for each trace:

  • Agent Information: Agent name, instructions, model configuration

  • Workflow Context: Workflow name, step IDs, execution flow

  • Tool Metadata: Tool names, parameters, execution results

  • Model Details: Model name, provider, token usage, response metadata

  • Performance Metrics: Execution time, token counts, costs

  • User Context: Session IDs, user information (if provided)

You can view all of this information in the Phoenix UI to debug issues, optimize performance, and understand your application's behavior.
