Adding SessionID and UserID as attributes to Spans for Tracing
What are Sessions?
Demo: debugging sessions in an LLM chatbot application with tracing and evals
A session is a grouping of traces based on a session ID attribute. When building or debugging a chatbot application, being able to see groups of messages or traces belonging to a series of interactions between a human and the AI can be particularly helpful. By adding session.id and user.id as attributes to spans, you can:
Find exactly where a conversation "breaks" or goes off the rails. This can help identify if a user becomes progressively more frustrated or if a chatbot is not helpful.
Find groups of traces where your application is not performing well. Adding session.id and/or user.id from an application lets you group back-and-forth interactions and filter them further.
Construct custom metrics based on evals using session.id or user.id to find best/worst performing sessions and users.
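For instance, once spans carry these attributes, exported span records can be grouped by session to reconstruct each conversation. A minimal sketch, assuming spans exported as plain dictionaries (the span shape here is hypothetical, not a specific export format):

```python
from collections import defaultdict

# Hypothetical exported spans, each carrying "session.id" and "user.id" attributes.
spans = [
    {"name": "chat", "attributes": {"session.id": "s-1", "user.id": "u-9"}},
    {"name": "chat", "attributes": {"session.id": "s-2", "user.id": "u-9"}},
    {"name": "chat", "attributes": {"session.id": "s-1", "user.id": "u-9"}},
]

# Group spans by session to see each back-and-forth interaction together.
by_session = defaultdict(list)
for span in spans:
    by_session[span["attributes"]["session.id"]].append(span)

print({sid: len(group) for sid, group in by_session.items()})  # → {'s-1': 2, 's-2': 1}
```

The same grouping over user.id would surface all sessions belonging to a single user.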
Adding SessionID and UserID
Session and user IDs can be added to a span via auto instrumentation or manual OpenInference instrumentation. Any LLM call made within the context (the with block in the examples below) will carry the corresponding session.id or user.id as a span attribute. Both session.id and user.id must be non-empty strings.
When defining your instrumentation, you can pass the session ID as shown below.
using_session
Context manager to add session ID to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the session ID as a span attribute, following the OpenInference semantic conventions. Its input, the session ID, must be a non-empty string.
from openinference.instrumentation import using_session
with using_session(session_id="my-session-id"):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...
It can also be used as a decorator:
@using_session(session_id="my-session-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...
using_user
Context manager to add user ID to the current OpenTelemetry Context. OpenInference auto instrumentators will read this Context and pass the user ID as a span attribute, following the OpenInference semantic conventions. Its input, the user ID, must be a non-empty string.
from openinference.instrumentation import using_user
with using_user("my-user-id"):
    # Calls within this block will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...
It can also be used as a decorator:
@using_user("my-user-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...
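Both context managers follow the same pattern: they stash a value on the active context, and downstream instrumentation reads it when a span is created. A minimal sketch of that pattern using Python's stdlib contextvars — a toy stand-in for illustration only, not the OpenInference implementation, which uses the OpenTelemetry Context API:

```python
import contextvars
from contextlib import contextmanager

# Context variable playing the role of the OpenTelemetry Context slot.
_session_id = contextvars.ContextVar("session_id", default=None)

@contextmanager
def using_session(session_id: str):
    # Mirror the real API's contract: the session ID must be a non-empty string.
    if not isinstance(session_id, str) or not session_id:
        raise ValueError("session_id must be a non-empty string")
    token = _session_id.set(session_id)
    try:
        yield
    finally:
        # Restore the previous value so the ID does not leak past the block.
        _session_id.reset(token)

def make_span_attributes():
    # An instrumentor would read the active context like this when starting a span.
    attrs = {}
    if _session_id.get() is not None:
        attrs["session.id"] = _session_id.get()
    return attrs

with using_session("my-session-id"):
    assert make_span_attributes() == {"session.id": "my-session-id"}
assert make_span_attributes() == {}  # outside the block, nothing is attached
```

The reset-in-finally step is what makes the context managers safe to nest: each block only affects spans created inside it.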
We provide a setSession function which allows you to set a sessionId on context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.
import { context } from "@opentelemetry/api"
import { setSession } from "@openinference-core"
context.with(
  setSession(context.active(), { sessionId: "session-id" }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "session.id" = "session-id"
  }
)
We also provide a setUser function which allows you to set a userId on context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.
import { context } from "@opentelemetry/api"
import { setUser } from "@openinference-core"
context.with(
  setUser(context.active(), { userId: "user-id" }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "user.id" = "user-id"
  }
)
Once you define your OpenAI client, any call inside our context managers will attach the corresponding attributes to the spans.
import openai
from openinference.instrumentation import using_attributes
client = openai.OpenAI()
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )
Alternatively, if you wrap your calls inside functions, you can use them as decorators:
import openai
from openinference.instrumentation import using_attributes

client = openai.OpenAI()
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)
Once you define your LangChain client, any call inside our context managers will attach the corresponding attributes to the spans.
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = llm.predict(adjective="funny")

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = llm.predict(adjective="funny")

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = llm.predict(adjective="funny")
Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)
Once you define your LlamaIndex client, any call inside our context managers will attach the corresponding attributes to the spans.
from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes
chat_engine = SimpleChatEngine.from_defaults()
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )
Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes
chat_engine = SimpleChatEngine.from_defaults()
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about fourth of July"
    )
Once you define your Mistral client, any call inside our context managers will attach the corresponding attributes to the spans.
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
from openinference.instrumentation import using_attributes

client = MistralClient()
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )
Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from mistralai.client import MistralClient
from openinference.instrumentation import using_attributes
client = MistralClient()
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)
Once you define your DSPy predictor, any call inside our context managers will attach the corresponding attributes to the spans.
import dspy
from openinference.instrumentation import using_attributes
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""

    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)
predictor = dspy.Predict(BasicQA)  # Define the predictor.

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = predictor(
        question="What is the capital of the United States?"
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = predictor(
        question="What is the capital of the United States?"
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = predictor(
        question="What is the capital of the United States?"
    )
Alternatively, if you wrap your calls inside functions, you can use them as decorators:
import dspy
from openinference.instrumentation import using_attributes
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)
To access an application's sessions in the platform, select "Sessions" from the left navigation menu.