LangChain

How to use the Python LangChainInstrumentor to trace LangChain and LangGraph

Phoenix has first-class support for LangChain applications.

Launch Phoenix

Install packages:

pip install arize-phoenix

Launch Phoenix:

import phoenix as px
px.launch_app()

Connect your notebook to Phoenix:

from phoenix.otel import register

tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
)

By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See Persistence or use one of the other deployment options to retain traces.

Install

pip install openinference-instrumentation-langchain

Setup

Initialize the LangChainInstrumentor before your application code.

from openinference.instrumentation.langchain import LangChainInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
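Conceptually, an instrumentor wraps the framework's entry points so that every call is recorded as a span before the original code executes, which is why it must be initialized before your application code. The sketch below is a simplified, hypothetical illustration of that wrapping pattern; `ToyInstrumentor`, `run_chain`, and `collected_spans` are invented for this example and are not Phoenix or OpenInference APIs:

```python
import functools

# Hypothetical "framework" function we want to trace.
def run_chain(prompt: str) -> str:
    return f"answer to: {prompt}"

collected_spans = []  # stands in for the span exporter

class ToyInstrumentor:
    """Monkey-patches run_chain so every call records a span first."""
    def instrument(self):
        global run_chain
        original = run_chain

        @functools.wraps(original)
        def traced(prompt: str) -> str:
            collected_spans.append({"name": "run_chain", "input": prompt})
            return original(prompt)

        run_chain = traced

ToyInstrumentor().instrument()
run_chain("why is the sky blue?")
# collected_spans now holds one span describing the call
```

If `run_chain` had already been called (or captured by other code) before `instrument()`, those calls would bypass the wrapper entirely, which mirrors why the real instrumentor must run first.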

Run LangChain

Once LangChain is instrumented, spans are created whenever a chain runs and are sent to the Phoenix server for collection.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")
chain.invoke(dict(y="sky"))
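To see what the chain sends to the model: the partial template pre-fills x and z, so invoking with only y renders the full prompt before it reaches ChatOpenAI. A stdlib-only sketch of that fill-in behavior, with plain str.format standing in for ChatPromptTemplate purely for illustration:

```python
# Mimic ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
template = "{x} {y} {z}?"
partial_vars = {"x": "why is", "z": "blue"}

def render(**kwargs):
    # Merge the pre-filled variables with those supplied at invoke time.
    return template.format(**{**partial_vars, **kwargs})

print(render(y="sky"))  # -> why is sky blue?
```

The rendered prompt is what appears as the chain's input on the resulting span in Phoenix.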

Observe

Now that tracing is set up, all chain invocations will be streamed to your running Phoenix instance for observability and evaluation.
