Add attributes, metadata and tags


You might want to track additional application details manually. This is particularly useful for actions outside standard frameworks or LLM clients.

Add attributes to a span

Attributes let you attach key/value pairs to a span so it carries more information about the current operation that it's tracking.

Notice that the attributes below have a specific prefix, operation. When adding custom attributes, it's best practice to vendor your attributes (e.g. mycompany.) so that they do not clash with semantic conventions.

from opentelemetry import trace

current_span = trace.get_current_span()

current_span.set_attribute("operation.value", 1)
current_span.set_attribute("operation.name", "Saying hello!")
current_span.set_attribute("operation.other-stuff", [1, 2, 3])

You can add attributes to spans in JavaScript in multiple ways:

tracer.startActiveSpan(
  'app.new-span',
  { attributes: { attribute1: 'value1' } },
  (span) => {
    // do some work...

    span.end();
  },
);
function chat(message: string, user: User) {
  return tracer.startActiveSpan(`chat:${user.id}`, (span: Span) => {
    const result = `Echo: ${message}`; // placeholder for your chat logic

    // Add an attribute to the span
    span.setAttribute('mycompany.userid', user.id);

    span.end();
    return result;
  });
}
// Given a span you already hold a reference to (here, singleAttrSpan),
// you can set any custom attribute you want
singleAttrSpan.setAttribute("custom_attr", "custom attribute here");

// close the span
singleAttrSpan.end();

Add attributes tied to semantic conventions

OpenInference Semantic Conventions provide a structured schema to represent common LLM application attributes. These are well-known names for items like messages, prompt templates, metadata, and more. We've built a set of semantic conventions as part of the OpenInference package.

Setting attributes is crucial for understanding the flow of data and messages through your LLM application, which facilitates easier debugging and analysis. By setting attributes such as OUTPUT_VALUE and OUTPUT_MESSAGES, you can capture essential output details and interaction messages within the context of a span. This allows you to record the response and categorize and store messages exchanged by components in a structured format, which is used in Arize to help you debug your application.

To use OpenInference Semantic Attributes in Python, ensure you have the semantic conventions package:

pip install openinference-semantic-conventions

Then run the following to set semantic attributes:

from openinference.semconv.trace import MessageAttributes, SpanAttributes

span.set_attribute(SpanAttributes.OUTPUT_VALUE, response)

# This shows up under `output_messages` tab on the span page within Arize
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
    "assistant",
)
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
    response,
)

First, add both semantic conventions packages as dependencies to your application:

npm install --save @opentelemetry/semantic-conventions @arizeai/openinference-semantic-conventions

Add the following to the top of your application file:

import { SemanticConventions } from '@arizeai/openinference-semantic-conventions';

Finally, you can update your file to include semantic attributes:

const doWork = () => {
  tracer.startActiveSpan('app.doWork', (span) => {
    span.setAttribute(SemanticConventions.INPUT_VALUE, 'work input');
    // Do some work...

    span.end();
  });
};

For instance, in the chat example from the previous section, we may want to create a span to capture some information about our request before we call out to OpenAI, which is auto-instrumented using the OpenInference OpenAI package.

/*app.ts*/
import { trace, SpanStatusCode } from '@opentelemetry/api';
import express, { Express } from 'express';
import { OpenAI } from "openai";
import {
  MimeType,
  OpenInferenceSpanKind,
  SemanticConventions,
} from "@arizeai/openinference-semantic-conventions";

const tracer = trace.getTracer('llm-server', '0.1.0');

const PORT: number = parseInt(process.env.PORT || '8080');
const app: Express = express();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.get('/chat', (req, res) => {
  const message = req.query.message
  // Start a chain span, this will be the parent of all the work done in this route
  // including the spans created by the OpenAI auto instrumentation package
  tracer.startActiveSpan("chat chain", async (span) => {
     span.setAttributes({
        [SemanticConventions.OPENINFERENCE_SPAN_KIND]:
          OpenInferenceSpanKind.CHAIN,
        [SemanticConventions.INPUT_VALUE]: message,
        [SemanticConventions.INPUT_MIME_TYPE]: MimeType.TEXT,
        // Metadata can be used to store user defined values
        [SemanticConventions.METADATA]: JSON.stringify({
          "userId": req.query.userId,
          "conversationId": req.query.conversationId 
        })
      });
      
    // Will be picked up by auto instrumentation
    const chatCompletion = await openai.chat.completions.create({
      messages: [{ role: "user", content: message }],
      model: "gpt-3.5-turbo",
    });
    
    const response = chatCompletion.choices[0].message;
    span.setAttributes({
        [SemanticConventions.OUTPUT_VALUE]: response.content,
        [SemanticConventions.OUTPUT_MIME_TYPE]: MimeType.TEXT,
        [`${SemanticConventions.LLM_OUTPUT_MESSAGES}.0.${SemanticConventions.MESSAGE_CONTENT}`]:
          response.content,
        [`${SemanticConventions.LLM_OUTPUT_MESSAGES}.0.${SemanticConventions.MESSAGE_ROLE}`]:
          response.role,
      });
    span.setStatus({ code: SpanStatusCode.OK });
    // End the span
    span.end();
    res.send(response.content);
  })
});

app.listen(PORT, () => {
  console.log(`Listening for requests on http://localhost:${PORT}`);
});

This example demonstrates how to use a CHAIN span to wrap our LLM span, allowing additional application data to be tracked. This data can then be analyzed in Arize. For more complex applications, different strategies might be required. Refer to the previous section for detailed guidance on creating and nesting spans effectively.

// Use OpenInference semantic conventions to set reserved attributes 
singleAttrSpan.setAttribute("openinference.span.kind", "CHAIN");
singleAttrSpan.setAttribute("input.value", input);
singleAttrSpan.setAttribute("output.value", output);

Add attributes to multiple spans at once

You can set attributes once on the OpenTelemetry Context, and our tracing integrations will attempt to pass these attributes to all other spans underneath a parent trace.

Supported Context Attributes include:

  • Metadata: Metadata associated with a span.

  • Tags: List of tags to give the span a category.

  • Session ID: Unique identifier for a session.

  • User ID: Unique identifier for a user.

  • Prompt Template:

    • Template: Used to generate prompts as Python f-strings.

    • Version: The version of the prompt template.

    • Variables: key-value pairs applied to the prompt template.

Here are the functions we support to add attributes to context.

pip install openinference-instrumentation
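
The combined examples further down also use using_session and using_user, which follow the same context-manager pattern. A minimal sketch, assuming the same openinference.instrumentation package:

from openinference.instrumentation import using_session, using_user

with using_session(session_id="my-session-id"), using_user(user_id="my-user-id"):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    ...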

using_metadata

Context manager to add metadata to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the metadata as a span attribute, following the OpenInference semantic conventions. Its input, the metadata, must be a dictionary with string keys. This dictionary will be serialized to JSON when saved to the OTEL Context and remain a JSON string when sent as a span attribute.

from openinference.instrumentation import using_metadata
metadata = {
    "key-1": value_1,
    "key-2": value_2,
    ...
}
with using_metadata(metadata):
    # Calls within this block will generate spans with the attributes:
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    ...

It can also be used as a decorator:

@using_metadata(metadata)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    ...

using_tags

Context manager to add tags to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the tags as a span attribute, following the OpenInference semantic conventions. Its input, the tag list, must be a list of strings.

from openinference.instrumentation import using_tags
tags = ["tag_1", "tag_2", ...]
with using_tags(tags):
    # Calls within this block will generate spans with the attributes:
    # "tag.tags" = "["tag_1","tag_2",...]"
    ...

It can also be used as a decorator:

@using_tags(tags)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "tag.tags" = "["tag_1","tag_2",...]"
    ...

using_prompt_template

Context manager to add a prompt template (including its version and variables) to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the prompt template fields as span attributes, following the OpenInference semantic conventions. Its inputs must be of the following type:

  • Template: non-empty string.

  • Version: non-empty string.

  • Variables: a dictionary with string keys. This dictionary will be serialized to JSON when saved to the OTEL Context and remain a JSON string when sent as a span attribute.

from openinference.instrumentation import using_prompt_template
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
with using_prompt_template(
    template=prompt_template,
    version="v1.0",
    variables=prompt_template_variables,
    ):
    # Calls within this block will generate spans with the attributes:
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.version" = "v1.0"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    ...

It can also be used as a decorator:

@using_prompt_template(
    template=prompt_template,
    version="v1.0",
    variables=prompt_template_variables,
)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.version" = "v1.0"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    ...

using_attributes

Context manager to add attributes to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the attributes as span attributes, following the OpenInference semantic conventions. This is a convenient context manager to use if you find yourself using many of the previous ones in conjunction.

from openinference.instrumentation import using_attributes
tags = ["tag_1", "tag_2", ...]
metadata = {
    "key-1": value_1,
    "key-2": value_2,
    ...
}
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
prompt_template_version = "v1.0"
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

The previous example is equivalent to the following, making using_attributes a very convenient tool for more complex settings.

with (
    using_session("my-session-id"),
    using_user("my-user-id"),
    using_metadata(metadata),
    using_tags(tags),
    using_prompt_template(
        template=prompt_template,
        version=prompt_template_version,
        variables=prompt_template_variables,
    ),
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

It can also be used as a decorator:

@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

get_attributes_from_context

Our instrumentation package offers a convenience function, get_attributes_from_context, to read the context attributes set above from the OTEL context.

In the following example, we assume these attributes have been set in the OTEL context:

tags = ["tag_1", "tag_2"]
metadata = {
    "key-1": 1,
    "key-2": "2",
}
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
prompt_template_version = "v1.0"

We then use get_attributes_from_context to extract them from the OTEL context. You can use it in your manual instrumentation to attach these attributes to your spans.

from openinference.instrumentation import get_attributes_from_context

span.set_attributes(dict(get_attributes_from_context()))
# The span will then have the following attributes attached:
# {
#    'session.id': 'my-session-id',
#    'user.id': 'my-user-id',
#    'metadata': '{"key-1": 1, "key-2": "2"}',
#    'tag.tags': ['tag_1', 'tag_2'],
#    'llm.prompt_template.template': 'Please describe the weather forecast for {city} on {date}',
#    'llm.prompt_template.version': 'v1.0',
#    'llm.prompt_template.variables': '{"city": "Johannesburg", "date": "July 11"}'
# }
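
Putting it together, here is a minimal sketch (assuming a configured OTEL tracer) that sets attributes on context and copies them onto a manually created span:

from opentelemetry import trace

from openinference.instrumentation import (
    get_attributes_from_context,
    using_attributes,
)

tracer = trace.get_tracer(__name__)

with using_attributes(session_id="my-session-id", user_id="my-user-id"):
    with tracer.start_as_current_span("manual-span") as span:
        # Copy the context attributes onto the manually created span
        span.set_attributes(dict(get_attributes_from_context()))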

In JavaScript, you can use any of the utilities below in conjunction with context.with to set attributes on the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback. First, install the OpenInference core package:

npm install --save @arizeai/openinference-core @opentelemetry/api

setMetadata

We provide a setMetadata function which allows you to set metadata attributes on context. Metadata attributes will be serialized to a JSON string when stored on context and will be propagated to spans in the same way.

import { context } from "@opentelemetry/api"
import { setMetadata } from "@arizeai/openinference-core"

context.with(
  setMetadata(context.active(), { key1: "value1", key2: "value2" }),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "metadata" = '{"key1": "value1", "key2": "value2"}'
  }
)

setTags

We provide a setTags function which allows you to set a list of string tags on context. Tags, like metadata, will be serialized to a JSON string when stored on context and will be propagated to spans in the same way.

import { context } from "@opentelemetry/api"
import { setTags } from "@arizeai/openinference-core"

context.with(
  setTags(context.active(), ["value1", "value2"]),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "tag.tags" = '["value1", "value2"]'
  }
)

setPromptTemplate

We provide a setPromptTemplate function which allows you to set a template, version, and variables on context. The components of a prompt template are:

  • template - a string with templated variables, e.g. "hello {{name}}"

  • variables - an object with variable names and their values, e.g. {name: "world"}

  • version - a string version of the template, e.g. v1.0

All of these are optional. Application of variables to a template will typically happen before the call to an LLM and may not be picked up by auto instrumentation, so setting the prompt template on context helps ensure you can see the template and variables while troubleshooting.

import { context } from "@opentelemetry/api"
import { setPromptTemplate } from "@arizeai/openinference-core"

context.with(
  setPromptTemplate(
    context.active(),
    { 
      template: "hello {{name}}",
      variables: { name: "world" },
      version: "v1.0"
    }
  ),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "llm.prompt_template.template" = "hello {{name}}"
      // "llm.prompt_template.version" = "v1.0"
      // "llm.prompt_template.variables" = '{ "name": "world" }'
  }
)

setAttributes

We provide a setAttributes function which allows you to add a set of attributes to context. Attributes set on context using setAttributes must be valid span attribute values.

import { context } from "@opentelemetry/api"
import { setAttributes } from "@arizeai/openinference-core"

context.with(
  setAttributes(context.active(), { myAttribute: "test" }),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "myAttribute" = "test"
  }
)

You can also use multiple setters at the same time to propagate multiple attributes to the spans below. Since each setter function returns a new context, they can be used together as follows:

import { context } from "@opentelemetry/api"
import { setAttributes, setSession } from "@arizeai/openinference-core"

context.with(
  setAttributes(
    setSession(context.active(), { sessionId: "session-id"}),
    { myAttribute: "test" }
  ),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "myAttribute" = "test"
      // "session.id" = "session-id"
  }
)

You can also use setAttributes in conjunction with the OpenInference Semantic Conventions to set OpenInference attributes manually:

import { context } from "@opentelemetry/api"
import { setAttributes } from "@arizeai/openinference-core"
import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";


context.with(
  setAttributes(
    context.active(),
    { [SemanticConventions.SESSION_ID]: "session-id" }
  ),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "session.id" = "session-id"
  }
)

getAttributesFromContext

We also provide a utility function, getAttributesFromContext, that allows you to pull all of the attributes off of a context. You can then use this to set them on your spans.

import { getAttributesFromContext } from "@arizeai/openinference-core";
import { context, trace } from "@opentelemetry/api"

const contextAttributes = getAttributesFromContext(context.active())
const tracer = trace.getTracer("example")
const span = tracer.startSpan("example span")
span.setAttributes(contextAttributes)
span.end();

This allows you to propagate context attributes to any manually created spans.
