Sessions

Adding SessionID and UserID as attributes to Spans for Tracing

A session is a grouping of traces based on a session ID attribute. When building or debugging a chatbot application, it is particularly helpful to see the group of messages or traces that belong to a single series of interactions between a human and the AI. By adding session.id and user.id as attributes to spans, you can:

  • Find exactly where a conversation "breaks" or goes off the rails. This can help identify if a user becomes progressively more frustrated or if a chatbot is not helpful.

  • Find groups of traces where your application is not performing well. Adding session.id and/or user.id to spans lets these back-and-forth interactions be grouped and filtered further.

  • Construct custom metrics based on evals using session.id or user.id to find the best and worst performing sessions and users (see the sketch after this list).
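
For example, once spans are exported (see Export Traces), a session-level metric can be computed by grouping on the session.id attribute. Below is a minimal sketch in pandas; the column names ("attributes.session.id", "eval.correctness.label") are illustrative assumptions about the exported schema, not a guaranteed format.

import pandas as pd

# Hypothetical export: one row per span, with the session ID and an eval label.
spans_df = pd.DataFrame({
    "attributes.session.id": ["s1", "s1", "s2", "s2", "s2"],
    "eval.correctness.label": ["correct", "incorrect", "correct", "correct", "correct"],
})

# Fraction of "correct" eval labels per session; sort ascending to surface
# the worst-performing sessions first.
session_scores = (
    spans_df.assign(correct=spans_df["eval.correctness.label"].eq("correct"))
    .groupby("attributes.session.id")["correct"]
    .mean()
    .sort_values()
)
print(session_scores)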

Adding SessionID and UserID

Session and user IDs can be added to a span using OpenInference auto instrumentation or manual instrumentation. Any LLM call within the context (the with block in the example below) will carry the corresponding session.id or user.id span attribute. Both session.id and user.id must be non-empty strings.

When defining your instrumentation, you can pass the session ID as shown below.

using_session

A context manager that adds the session ID to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the session ID as a span attribute, following the OpenInference semantic conventions. Its input, the session ID, must be a non-empty string.

from openinference.instrumentation import using_session

with using_session(session_id="my-session-id"):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

It can also be used as a decorator:

@using_session(session_id="my-session-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

using_user

A context manager that adds the user ID to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the user ID as a span attribute, following the OpenInference semantic conventions. Its input, the user ID, must be a non-empty string.

from openinference.instrumentation import using_user
with using_user("my-user-id"):
    # Calls within this block will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...

It can also be used as a decorator:

@using_user("my-user-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...
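
Because using_session and using_user are both context managers, they can also be nested to attach a session ID and a user ID at the same time:

from openinference.instrumentation import using_session, using_user

with using_session(session_id="my-session-id"):
    with using_user("my-user-id"):
        # Calls within this block will generate spans with both attributes:
        # "session.id" = "my-session-id"
        # "user.id" = "my-user-id"
        ...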
setSession (TypeScript)

We provide a setSession function which allows you to set a sessionId on context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.

npm install --save @arizeai/openinference-core @opentelemetry/api

import { context } from "@opentelemetry/api"
import { setSession } from "@arizeai/openinference-core"

context.with(
  setSession(context.active(), { sessionId: "session-id" }),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "session.id" = "session-id"
  }
)

setUser (TypeScript)

We also provide a setUser function which allows you to set a userId on context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.

import { context } from "@opentelemetry/api"
import { setUser } from "@arizeai/openinference-core"

context.with(
  setUser(context.active(), { userId: "user-id" }),
  () => {
      // Calls within this block will generate spans with the attributes:
      // "user.id" = "user-id"
  }
)

Additional Examples

OpenAI

Requires pip install openinference-instrumentation-openai

Once you define your OpenAI client, any call inside our context managers will attach the corresponding attributes to the spans.

import openai
from openinference.instrumentation import using_attributes

client = openai.OpenAI()

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

from openinference.instrumentation import using_attributes

client = openai.OpenAI()

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)
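
The decorated function is then invoked like any other; a hypothetical call, reusing the client defined above:

response = call_fn(
    client,
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku."}],
    max_tokens=20,
)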

LangChain

Requires pip install openinference-instrumentation-langchain

Once you define your LangChain client, any call inside our context managers will attach the corresponding attributes to the spans.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = llm.predict(adjective="funny")

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = llm.predict(adjective="funny")
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = llm.predict(adjective="funny")

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

LlamaIndex

Requires pip install openinference-instrumentation-llama-index

Once you define your LlamaIndex client, any call inside our context managers will attach the corresponding attributes to the spans.

from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes

chat_engine = SimpleChatEngine.from_defaults()

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes

chat_engine = SimpleChatEngine.from_defaults()

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )
    

Amazon Bedrock

Requires pip install openinference-instrumentation-bedrock

Once you define your boto3 session client, any call inside our context managers will attach the corresponding attributes to the spans.

import boto3
from openinference.instrumentation import using_attributes

session = boto3.session.Session()
client = session.client("bedrock-runtime", region_name="us-west-2")

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=(
            b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}'
        )
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=(
            b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}'
        )
    )
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=(
            b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}'
        )
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

import boto3
from openinference.instrumentation import using_attributes

session = boto3.session.Session()
client = session.client("bedrock-runtime", region_name="us-west-2")

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)

MistralAI

Requires pip install openinference-instrumentation-mistralai

Once you define your Mistral client, any call inside our context managers will attach the corresponding attributes to the spans.

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
from openinference.instrumentation import using_attributes

client = MistralClient()

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

from mistralai.client import MistralClient
from openinference.instrumentation import using_attributes

client = MistralClient()

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

DSPy

Requires pip install openinference-instrumentation-dspy

Once you define your DSPy predictor, any call inside our context managers will attach the corresponding attributes to the spans.

import dspy
from openinference.instrumentation import using_attributes

class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""

    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)
predictor = dspy.Predict(BasicQA) # Define the predictor.

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = predictor(
        question="What is the capital of the united states?"
    )  

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = predictor(
        question="What is the capital of the united states?"
    )  
    
# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = predictor(
        question="What is the capital of the united states?"
    )  

Alternatively, if you wrap your calls inside functions, you can use them as decorators:

import dspy
from openinference.instrumentation import using_attributes

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args,**kwargs)  
    
# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args,**kwargs)  

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(predictor, *args, **kwargs):
    return predictor(*args,**kwargs)  
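
To sanity-check that the IDs are actually being placed on the active context, the openinference-instrumentation package exposes a get_attributes_from_context helper, which yields the OpenInference attributes stored on the current OpenTelemetry context (the same attributes the auto instrumentators copy onto spans). A minimal sketch, assuming that helper:

from openinference.instrumentation import get_attributes_from_context, using_attributes

with using_attributes(session_id="my-session-id", user_id="my-user-id"):
    # Expected output (assuming no other context attributes are set):
    # {"session.id": "my-session-id", "user.id": "my-user-id"}
    print(dict(get_attributes_from_context()))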

To access an application's sessions in the platform, select "Sessions" from the left nav.

Demo: debugging sessions in an LLM chatbot application with tracing and evals