Tracing an Agent

Agents can be traced in a couple of different ways.

We support tracing for common frameworks using auto-instrumentation, as in the sketch below.
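For example, a framework such as LangChain can be auto-instrumented with its OpenInference instrumentor. The following is a minimal sketch, assuming the openinference-instrumentation-langchain package is installed and an OTLP collector is running (the localhost endpoint is a placeholder):

# A minimal sketch of framework auto-instrumentation (LangChain shown here).
# The collector endpoint below is a placeholder; point it at your own collector.
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Configure an OpenTelemetry tracer provider that exports spans over OTLP/HTTP
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(tracer_provider)

# One call instruments every LangChain invocation with OpenInference spans
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)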

We also support manually tracing agents for custom agent instrumentation. Manual agent instrumentation typically involves instrumenting one or more LLM routers and a set of tool calls or skills that can be chained together.

An example of manually instrumenting an agent (from a notebook) is shown below:

The diagram above shows an example assistant: a single LLM router layer with function calls, where an LLM is also used inside the function call.

The agent span kind is normally used as the top-level span on entry to an iterative processing step.

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.semconv.trace import OpenInferenceSpanKindValues, SpanAttributes
from openai import OpenAI
from opentelemetry import trace

# Apply automatic instrumentation so every OpenAI call is traced as an LLM span
OpenAIInstrumentor().instrument()

client = OpenAI()

def agent_router(input):
    # Obtain a tracer instance
    tracer = trace.get_tracer(__name__)
    # Open an AGENT span as the top-level span for this routing step
    with tracer.start_as_current_span(
        "AgentOperation",
        attributes={
            SpanAttributes.OPENINFERENCE_SPAN_KIND: OpenInferenceSpanKindValues.AGENT.value
        },
    ) as span:
        # TASK_MODEL and functions are assumed to be defined elsewhere
        response = client.chat.completions.create(
            model=TASK_MODEL,
            temperature=0,
            functions=functions,
            messages=[
                {
                    "role": "system",
                    "content": " ",
                },
                {
                    "role": "user",
                    "content": input["questions"],
                },
            ],
        )
        return response

The example above shows a single-layer router with an LLM used inside the function call.
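To extend this pattern, each function the router dispatches to can be wrapped in its own tool span, so the auto-instrumented LLM call inside it nests under the tool span, which in turn nests under the agent span. The sketch below assumes the same tracer setup, instrumented client, and TASK_MODEL as the router example above; lookup_shipping_status and its logic are hypothetical names used only for illustration:

# A minimal sketch of a tool call wrapped in a TOOL span, assuming the same
# tracer and instrumented OpenAI client as the router example above.
# The tool name and its logic are hypothetical, for illustration only.
from openinference.semconv.trace import OpenInferenceSpanKindValues, SpanAttributes
from opentelemetry import trace

def lookup_shipping_status(order_id):
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span(
        "lookup_shipping_status",
        attributes={
            SpanAttributes.OPENINFERENCE_SPAN_KIND: OpenInferenceSpanKindValues.TOOL.value
        },
    ) as span:
        # Any LLM call made here is auto-instrumented and nests under this
        # TOOL span, which itself nests under the AGENT span of the router.
        summary = client.chat.completions.create(
            model=TASK_MODEL,
            temperature=0,
            messages=[
                {"role": "user", "content": f"Summarize status for order {order_id}"},
            ],
        )
        return summary.choices[0].message.content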
