Quickstart: Tracing (TS)

Overview

Phoenix supports two main options to collect traces:

  1. Use automatic instrumentation to capture all calls made to supported frameworks.

  2. Use base OpenTelemetry instrumentation. Supported in Python and TS / JS, among many other languages.

This example uses both options.

Launch Phoenix

Using Phoenix Cloud

  1. Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

  2. Grab your API key from the Keys option on the left bar.

  3. In your code, configure environment variables for your endpoint and API key:

# .env, or shell environment

# Add Phoenix API Key for tracing
PHOENIX_API_KEY="ADD YOUR API KEY"
# And Collector Endpoint for Phoenix Cloud
PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com"
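Node does not load a .env file automatically in every version, so make sure these variables are present in process.env before your tracing setup runs. A minimal sketch, assuming the dotenv package (installed with npm install dotenv, which is not otherwise part of this guide):

// at the very top of your entrypoint, before any tracing setup
import "dotenv/config";

On Node 20.6+ you can instead skip the package and pass the file directly: node --env-file=.env main.ts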

Using Self-hosted Phoenix

  1. In your code, configure environment variables for your endpoint and API key:

# .env, or shell environment

# Collector Endpoint for your self-hosted Phoenix, like localhost
PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
# (optional) If authentication is enabled, add a Phoenix API key for tracing
PHOENIX_API_KEY="ADD YOUR API KEY"

Run Phoenix using Docker, local terminal, Kubernetes, etc. For more information, see self-hosting.
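For example, a minimal sketch of running Phoenix locally with Docker (assuming the arizephoenix/phoenix container image published by Arize):

# expose the UI/OTLP-HTTP port (6006) and the OTLP-gRPC port (4317)
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest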

Connect to Phoenix

To collect traces from your application, you must configure an OpenTelemetry TracerProvider to send traces to Phoenix.

# npm, pnpm, yarn, etc
npm install @arizeai/openinference-semantic-conventions @opentelemetry/semantic-conventions @opentelemetry/api @opentelemetry/instrumentation @opentelemetry/resources @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-proto

In a new file called instrumentation.ts (or .js, if applicable), add the following:

// instrumentation.ts
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";

import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);

const COLLECTOR_ENDPOINT = process.env.PHOENIX_COLLECTOR_ENDPOINT;
const SERVICE_NAME = "my-llm-app";

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    // defaults to "default" in the Phoenix UI
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    // BatchSpanProcessor will flush spans in batches after some time,
    // this is recommended in production. For development or testing purposes
    // you may try SimpleSpanProcessor for instant span flushing to the Phoenix UI.
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
        // (optional) if connecting to Phoenix Cloud
        // headers: { "api_key": process.env.PHOENIX_API_KEY },
        // (optional) if connecting to self-hosted Phoenix with Authentication enabled
        // headers: { "Authorization": `Bearer ${process.env.PHOENIX_API_KEY}` }
      })
    ),
  ],
});

provider.register();

Remember to add your environment variables to your shell environment before running this sample! Uncomment one of the authorization headers above if you plan to connect to an authenticated Phoenix instance.
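As the comments in the sample note, BatchSpanProcessor flushes spans in batches, which can add a short delay before traces appear. For development, here is a minimal sketch of the SimpleSpanProcessor alternative (the devProvider name is ours; in practice you would replace the provider definition in instrumentation.ts):

// development only: flush each span to Phoenix as soon as it ends
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";

const devProvider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({ url: `${COLLECTOR_ENDPOINT}/v1/traces` })
    ),
  ],
});

devProvider.register();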

Now, import this file at the top of your main program entrypoint, or invoke it with the node CLI's --require flag:

// main.ts or similar
import "./instrumentation.ts";

Then run your program as usual:

# in your cli, script, Dockerfile, etc
node main.ts

Alternatively, leave main.ts unchanged and load the instrumentation file with the --require flag:

# in your cli, script, Dockerfile, etc
node --require ./instrumentation.ts main.ts

Starting with Node v22, Node can natively execute TypeScript files. If this is not supported in your runtime, ensure that you can compile your TypeScript files to JavaScript, or use JavaScript instead.
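If your runtime cannot execute TypeScript directly, a common workaround (assumption: the tsx package, which is not among this guide's dependencies) is:

# run the TypeScript entrypoint without a separate compile step
npx tsx main.ts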

Our program is now ready to trace calls made by an LLM library, but it will not do anything just yet. Let's choose an instrumentation library to collect our traces, and register it with our provider.

Trace all calls made to a library

Phoenix can capture all calls made to supported libraries automatically. Just install the respective OpenInference library:

# npm, pnpm, yarn, etc
npm install openai @arizeai/openinference-instrumentation-openai

Update your instrumentation.ts file, registering the instrumentation. Steps will vary depending on whether your project is configured for CommonJS or ESM style module resolution.

For ESM projects, the imported module must be instrumented manually:

// instrumentation.ts

// ... rest of imports
import OpenAI from "openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// ... previous code

const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);

registerInstrumentations({
  instrumentations: [instrumentation],
});

For CommonJS projects, registering the instrumentation is enough:

// instrumentation.ts

// ... rest of imports
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// ... previous code

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});

Your project can be configured for CommonJS or ESM in several ways. It can depend on your runtime (Node, Deno, etc.), as well as configuration within your `package.json`. Consult your runtime's documentation for more details.
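For Node specifically, the most common switch is the `"type"` field in `package.json`; a minimal sketch:

{
  "type": "module"
}

Setting `"type": "module"` opts the package into ESM resolution; omitting the field, or setting it to `"commonjs"`, keeps CommonJS.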

Finally, in your app code, invoke OpenAI:

// main.ts
import OpenAI from "openai";

// set OPENAI_API_KEY in environment, or pass it in arguments
const openai = new OpenAI();

openai.chat.completions
  .create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a haiku." }],
  })
  .then((response) => {
    console.log(response.choices[0].message.content);
  })
  // for demonstration purposes, keep the node process alive long
  // enough for BatchSpanProcessor to flush Trace to Phoenix
  // with its default flush time of 5 seconds
  .then(() => new Promise((resolve) => setTimeout(resolve, 6000)));
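The trailing setTimeout is only a demonstration device. A cleaner sketch is to flush the tracer provider explicitly before the process exits; this assumes you add export { provider } to instrumentation.ts (the guide above does not) and that your project runs as ESM so top-level await is available:

// main.ts (alternative ending)
import OpenAI from "openai";
// assumption: instrumentation.ts adds `export { provider }`
import { provider } from "./instrumentation.ts";

const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku." }],
});
console.log(response.choices[0].message.content);

// push any spans still buffered in BatchSpanProcessor to Phoenix
await provider.forceFlush();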

View your Traces in Phoenix

You should now see traces in Phoenix!
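If nothing shows up, a quick way to test the pipeline independently of the OpenAI instrumentation is to emit a manual span using the OpenTelemetry API installed above (the tracer, span, and attribute names here are ours, for illustration only):

// anywhere after instrumentation.ts has run
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("quickstart-check");
tracer.startActiveSpan("hello-phoenix", (span) => {
  span.setAttribute("example.note", "manual test span");
  span.end();
});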

Next Steps

  • Explore tracing integrations
  • View use cases to see end-to-end examples
  • Customize tracing
