Grab your API key from the Keys option on the left bar.
In your code, configure environment variables for your endpoint and API key:
# .env, or shell environment
# Add Phoenix API Key for tracing
PHOENIX_API_KEY="ADD YOUR API KEY"
# And Collector Endpoint for Phoenix Cloud
PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com"
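To see how these two variables are consumed, here is a rough sketch (the `phoenixExportConfig` helper is hypothetical, not part of the Phoenix SDK): the collector endpoint gains a `/v1/traces` path, and the API key travels as an `api_key` header when connecting to Phoenix Cloud, mirroring the exporter configuration shown later in this guide.

```typescript
// Hypothetical helper (not part of the Phoenix SDK) illustrating how the
// environment variables above are typically consumed by the trace exporter.
function phoenixExportConfig(env: Record<string, string | undefined>): {
  url: string;
  headers: Record<string, string>;
} {
  // Fall back to Phoenix Cloud if no endpoint is configured
  const endpoint =
    env.PHOENIX_COLLECTOR_ENDPOINT ?? "https://app.phoenix.arize.com";
  const headers: Record<string, string> = {};
  if (env.PHOENIX_API_KEY) {
    // Phoenix Cloud expects the key in an "api_key" header
    headers["api_key"] = env.PHOENIX_API_KEY;
  }
  // OTLP trace exports are sent to the /v1/traces path on the collector
  return { url: `${endpoint}/v1/traces`, headers };
}
```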
Using Self-hosted Phoenix:
Run Phoenix using Docker, your local terminal, Kubernetes, etc. For more information, see Self-Hosting.
In your code, configure environment variables for your endpoint and API key:
# .env, or shell environment
# Collector Endpoint for your self hosted Phoenix, like localhost
PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
# (optional) If authentication enabled, add Phoenix API Key for tracing
PHOENIX_API_KEY="ADD YOUR API KEY"
Connect to Phoenix
To collect traces from your application, you must configure an OpenTelemetry TracerProvider to send traces to Phoenix.
In a new file called instrumentation.ts (or .js, if applicable):
// instrumentation.ts
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
const COLLECTOR_ENDPOINT = process.env.PHOENIX_COLLECTOR_ENDPOINT;
const SERVICE_NAME = "my-llm-app";
const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    // defaults to "default" in the Phoenix UI
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    // BatchSpanProcessor will flush spans in batches after some time;
    // this is recommended in production. For development or testing purposes
    // you may try SimpleSpanProcessor for instant span flushing to the Phoenix UI.
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
        // (optional) if connecting to Phoenix Cloud
        // headers: { "api_key": process.env.PHOENIX_API_KEY },
        // (optional) if connecting to self-hosted Phoenix with authentication enabled
        // headers: { "Authorization": `Bearer ${process.env.PHOENIX_API_KEY}` },
      })
    ),
  ],
});
provider.register();
Remember to add your environment variables to your shell environment before running this sample! Uncomment one of the authorization headers above if you plan to connect to an authenticated Phoenix instance.
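Because a missing endpoint would silently produce a malformed export URL, you may want to fail fast at startup. A small guard sketch (the `assertPhoenixEnv` helper is hypothetical, not part of the sample above):

```typescript
// Hypothetical startup guard: fail fast with a clear message if the
// collector endpoint is not configured, rather than exporting to "undefined".
function assertPhoenixEnv(env: Record<string, string | undefined>): string {
  const endpoint = env.PHOENIX_COLLECTOR_ENDPOINT;
  if (!endpoint) {
    throw new Error(
      "PHOENIX_COLLECTOR_ENDPOINT is not set; traces have nowhere to go"
    );
  }
  return endpoint;
}
```

You could call this at the top of instrumentation.ts, passing `process.env`, before constructing the exporter.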
Now, import this file at the top of your main program entrypoint, or invoke it with the Node CLI's `--require` flag:
// main.ts or similar
import "./instrumentation.ts";
# in your cli, script, Dockerfile, etc
node main.ts
# in your cli, script, Dockerfile, etc
node --require ./instrumentation.ts main.ts
Starting with Node v22, Node can natively execute TypeScript files. If this is not supported in your runtime, ensure that you can compile your TypeScript files to JavaScript, or use JavaScript instead.
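If you go the compile-first route, one common arrangement is a pair of `package.json` scripts (a sketch; the script names and the `dist/` output directory are assumptions matching a default `tsc` setup with `typescript` installed as a dev dependency):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "node --require ./dist/instrumentation.js ./dist/main.js"
  }
}
```

Note that after compilation the `--require` flag points at the emitted `.js` file, not the `.ts` source.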
Our program is now ready to trace calls made by an LLM library, but it will not do anything just yet. Let's choose an instrumentation library to collect our traces, and register it with our TracerProvider.
Update your instrumentation.ts file, registering the instrumentation. Steps will vary depending on whether your project is configured for CommonJS or ESM style module resolution.
// instrumentation.ts
// ... rest of imports
import OpenAI from "openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// ... previous code

const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);

registerInstrumentations({
  instrumentations: [instrumentation],
});
// instrumentation.ts
// ... rest of imports
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// ... previous code

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
Your project can be configured for CommonJS or ESM via many methods. It can depend on your installed runtime (Node, Deno, etc.), as well as configuration within your `package.json`. Consult your runtime documentation for more details.
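For reference, in Node the module style is commonly controlled by the `type` field of `package.json` (a sketch; `"module"` selects ESM, while omitting the field or setting `"commonjs"` selects CommonJS):

```json
{
  "name": "my-llm-app",
  "type": "module"
}
```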
Finally, in your app code, invoke OpenAI:
// main.ts
import OpenAI from "openai";

// set OPENAI_API_KEY in your environment, or pass it in via arguments
const openai = new OpenAI();

openai.chat.completions
  .create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a haiku." }],
  })
  .then((response) => {
    console.log(response.choices[0].message.content);
  })
  // for demonstration purposes, keep the node process alive long
  // enough for the BatchSpanProcessor to flush traces to Phoenix
  // with its default flush time of 5 seconds
  .then(() => new Promise((resolve) => setTimeout(resolve, 6000)));
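In a real application you would typically flush and shut down the provider explicitly rather than sleeping. A sketch using the standard OpenTelemetry `forceFlush` and `shutdown` TracerProvider methods (the `FlushableProvider` interface here is just a local type for illustration; `NodeTracerProvider` satisfies it):

```typescript
// Sketch: drain pending spans explicitly instead of sleeping.
// forceFlush() and shutdown() are standard OpenTelemetry
// TracerProvider methods; NodeTracerProvider implements both.
interface FlushableProvider {
  forceFlush(): Promise<void>;
  shutdown(): Promise<void>;
}

async function flushAndShutdown(provider: FlushableProvider): Promise<void> {
  await provider.forceFlush(); // push buffered spans to the exporter now
  await provider.shutdown(); // release exporter resources
}
```

Calling `flushAndShutdown(provider)` at the end of your program (or from a `SIGTERM` handler) removes the need for the 6-second sleep above.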