Mastra Tracing
Install packages:
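A sketch of the install step, assuming the OpenInference Mastra integration package alongside Mastra itself:

```shell
# Assumed package set: Mastra core plus the OpenInference
# instrumentation for Mastra used later in this tutorial.
npm install @mastra/core @arizeai/openinference-mastra
```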
Initialize OpenTelemetry tracing for your Mastra application:
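A minimal sketch of enabling tracing on the Mastra instance; the service name and OTLP endpoint are placeholder assumptions you should adjust for your collector:

```javascript
// index.js — minimal tracing setup sketch. The serviceName and
// endpoint values here are assumptions, not required values.
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  // ...register your agents, workflows, and tools here...
  telemetry: {
    serviceName: "my-mastra-app", // name traces are reported under
    enabled: true,
    export: {
      type: "otlp", // send spans over OTLP
      endpoint: "http://localhost:4318", // your collector endpoint
    },
  },
});
```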
From here you can use Mastra as normal. All agents, workflows, and tool calls will be automatically traced.
Here is a full project example to get you started:
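As a starting point, here is one possible shape such a project could take: a single agent registered on a traced Mastra instance. The agent name, instructions, and model choice are illustrative assumptions, not part of the original example.

```javascript
// A minimal end-to-end sketch: one agent on a traced Mastra
// instance. Agent name, instructions, and model are assumptions.
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const assistant = new Agent({
  name: "assistant",
  instructions: "You are a concise, helpful assistant.",
  model: openai("gpt-4o-mini"),
});

export const mastra = new Mastra({
  agents: { assistant },
  telemetry: {
    serviceName: "my-mastra-app",
    enabled: true,
    export: { type: "otlp", endpoint: "http://localhost:4318" },
  },
});

// Every agent call like this is traced automatically:
const reply = await mastra
  .getAgent("assistant")
  .generate("What is OpenTelemetry?");
console.log(reply.text);
```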
The rest of this tutorial assumes you are running Phoenix locally on the default port, localhost:6006.
Add the OpenInference telemetry code to your index.js file. The complete file should now look like this:
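A sketch of what that file could look like, wiring Mastra's telemetry export to Phoenix through the OpenInference exporter. The service name is a placeholder, and the environment variable names follow common Phoenix conventions rather than anything mandated by the original:

```javascript
// index.js — sketch of the same setup exporting to Phoenix via
// the OpenInference exporter. Env var names are assumptions.
import { Mastra } from "@mastra/core";
import {
  OpenInferenceOTLPTraceExporter,
  isOpenInferenceSpan,
} from "@arizeai/openinference-mastra";

export const mastra = new Mastra({
  // ...agents, workflows, and tools as before...
  telemetry: {
    serviceName: "my-mastra-app",
    enabled: true,
    export: {
      type: "custom",
      exporter: new OpenInferenceOTLPTraceExporter({
        url: process.env.PHOENIX_COLLECTOR_ENDPOINT,
        headers: {
          Authorization: `Bearer ${process.env.PHOENIX_API_KEY}`,
        },
        // keep only OpenInference (AI-relevant) spans
        spanFilter: isOpenInferenceSpan,
      }),
    },
  },
});
```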
The Mastra instrumentation automatically captures:

- Agent Executions: Complete agent runs including instructions, model calls, and responses
- Workflow Steps: Individual step executions within workflows, including inputs, outputs, and timing
- Tool Calls: Function calls made by agents, including parameters and results
- LLM Interactions: All model calls with prompts, responses, token usage, and metadata
- RAG Operations: Vector searches, document retrievals, and embedding generations
- Memory Operations: Agent memory reads and writes
- Error Handling: Exceptions and error states in your AI pipeline
Phoenix will capture detailed attributes for each trace:

- Agent Information: Agent name, instructions, model configuration
- Workflow Context: Workflow name, step IDs, execution flow
- Tool Metadata: Tool names, parameters, execution results
- Model Details: Model name, provider, token usage, response metadata
- Performance Metrics: Execution time, token counts, costs
- User Context: Session IDs, user information (if provided)
You can view all of this information in the Phoenix UI to debug issues, optimize performance, and understand your application's behavior.
Sign up for Phoenix:
Sign up for an Arize Phoenix account at
Set your Phoenix endpoint and API Key:
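One way to set these, assuming the conventional Phoenix environment variable names and a Phoenix Cloud or local collector endpoint:

```shell
# Assumed env var names; for a local Phoenix instance the
# collector listens on localhost:6006.
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006/v1/traces"
export PHOENIX_API_KEY="your-phoenix-api-key"
```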
Your Phoenix API key can be found on the Keys section of your .