Copyright © 2025 Arize AI, Inc

LangChain

Last updated 1 month ago


Use the code block below to get started with our LangChainInstrumentor.

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id = "your-space-id", # in app space settings page
    api_key = "your-api-key", # in app space settings page
    project_name = "your-project-name", # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.langchain import LangChainInstrumentor

# Finish automatic instrumentation
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
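The snippet above carries everything Arize needs: the credentials travel with each export request, and the project name determines where spans land. Conceptually, the configuration that register() assembles looks something like this (an illustrative sketch, not the actual arize-otel internals):

```python
# Illustrative sketch of the configuration register() assembles;
# the real arize-otel internals may differ.
otlp_config = {
    "endpoint": "https://otlp.arize.com/v1",   # Arize OTLP collector
    "headers": {
        "space_id": "your-space-id",           # credentials are attached
        "api_key": "your-api-key",             # as request headers/metadata
    },
    "resource": {"model_id": "your-project-name"},  # routes spans to your project
}
print(otlp_config["endpoint"])
```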

For a more detailed demonstration, check our Colab tutorial:

Install the LangChain instrumentation package via npm:

npm install @arizeai/openinference-instrumentation-langchain
Library        Instrumentation

LangChain      @arizeai/openinference-instrumentation-langchain

The example below utilizes the OpenInference JavaScript LangChain example.

Navigate to the backend folder.

In addition to the above package, sending traces to Arize requires the following package: @opentelemetry/exporter-trace-otlp-grpc. This package can be installed in your environment by running the following command in your shell.

npm install @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js

instrumentation.ts should be implemented as below (you'll need to install all of the packages imported below in the same manner as above):

/*instrumentation.ts */
import { LangChainInstrumentation } from "@arizeai/openinference-instrumentation-langchain";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";
import * as CallbackManagerModule from "@langchain/core/callbacks/manager";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Arize specific - Create metadata and add your headers
const metadata = new Metadata();

// Your Arize Space and API Keys, which can be found in the UI
metadata.set('space_id', 'your-space-id');
metadata.set('api_key', 'your-api-key');

const processor = new SimpleSpanProcessor(new ConsoleSpanExporter());
const otlpProcessor = new SimpleSpanProcessor(
  new GrpcOTLPTraceExporter({
    url: "https://otlp.arize.com/v1",
    metadata,
  })
);

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    // Arize specific - The name of a new or preexisting model you 
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version"
  }),
  spanProcessors: [processor, otlpProcessor]
});

const lcInstrumentation = new LangChainInstrumentation();
// LangChain must be manually instrumented as it doesn't have
// a traditional module structure
lcInstrumentation.manuallyInstrument(CallbackManagerModule);

provider.register();

If you also want to send spans to a Phoenix collector at the same time, add the following code blocks, adapted from the original instrumentation.ts file.

import { 
  OTLPTraceExporter as ProtoOTLPTraceExporter 
} from "@opentelemetry/exporter-trace-otlp-proto";

// create another SpanProcessor below the previous SpanProcessor
const phoenixProcessor = new SimpleSpanProcessor(
    new ProtoOTLPTraceExporter({
      // This is the url where your phoenix server is running
      url: "http://localhost:6006/v1/traces",
    }),
  );

// replace the earlier NodeTracerProvider declaration so the provider
// is constructed with all three span processors
const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    // Arize specific - The name of a new or preexisting model you 
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version"
  }),
  spanProcessors: [processor, otlpProcessor, phoenixProcessor]
});

Follow the steps in the backend and frontend READMEs, or simply run:

docker compose up --build

Arize supports native thread tracking with LangChain by letting you use a session_id, thread_id, or conversation_id to group related calls. This makes it easy to track multi-turn conversations, monitor chatbot performance, and debug issues using Arize's observability tools. Below is an example demonstrating how to set a thread_id in the metadata for a chat:

import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  openAIApiKey: "my-api-key",
  modelName: "gpt-3.5-turbo",
});

async function run() {
  // First message invocation
  const response1 = await chatModel.invoke("Hello, how are you?", {
    metadata: {
      thread_id: "thread-456",
    },
  });

  // Second message invocation
  const response2 = await chatModel.invoke("What can you do?", {
    metadata: {
      thread_id: "thread-456",
    },
  });

  // Print thread_id and session_id from the responses
  console.log("Response 1 Metadata:");
  console.log(`Thread ID: thread-456`);
  console.log(`Session ID: ${response1.metadata?.session_id || "Not available"}`);

  console.log("Response 2 Metadata:");
  console.log(`Thread ID: thread-456`);
  console.log(`Session ID: ${response2.metadata?.session_id || "Not available"}`);
}

run().catch(console.error);

Once the repository is cloned, run the following command in your terminal to install the necessary dependencies:

npm install

After the installation of the dependencies in package.json is complete, you can execute the example by running:

node index.js

This will run the index.js file from the example repository in your local directory.
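The same thread grouping can be expressed in Python by passing metadata through the invoke config. A minimal sketch (build_thread_config is an illustrative helper, not part of any SDK; the commented usage assumes langchain-openai is installed and an OpenAI key is set):

```python
# Illustrative helper: build an invoke config that tags every call in a
# conversation with the same thread_id so Arize can group them.
def build_thread_config(thread_id: str) -> dict:
    return {"metadata": {"thread_id": thread_id}}

# Usage sketch (assumes langchain-openai and OPENAI_API_KEY):
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-3.5-turbo")
# llm.invoke("Hello, how are you?", config=build_thread_config("thread-456"))
print(build_thread_config("thread-456"))
```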

Arize has first-class support for LangChain applications. After instrumentation, you will have a full trace of every part of your LLM application, including input, embeddings, retrieval, functions, and output messages.

We follow a standardized format for how trace data should be structured using OpenInference, our open source package based on OpenTelemetry. The package used here is arize-otel, a lightweight convenience package to set up OpenTelemetry and send traces to Arize.
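OpenInference extends OpenTelemetry with semantic conventions for LLM spans. As a rough illustration, the attributes on an instrumented LLM span look something like this (a hand-built sketch using OpenInference attribute names, not actual SDK output):

```python
# Hand-built sketch of OpenInference-style attributes on an LLM span.
span_attributes = {
    "openinference.span.kind": "LLM",      # span type per OpenInference
    "llm.model_name": "gpt-3.5-turbo",     # which model served the call
    "input.value": "Hello, how are you?",  # prompt sent to the model
    "output.value": "I'm doing well!",     # completion returned
}
print(span_attributes["openinference.span.kind"])
```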

Run docker compose up --build to build and run the frontend, backend, and Phoenix all at the same time. Navigate to localhost:3000 to begin sending messages to the chatbot, and check out your traces in Arize at app.arize.com or in Phoenix at localhost:6006.

Native Thread Tracking

For an executable example of native thread tracking, please clone the following repository into your local directory: https://github.com/cephalization/langchain-instrumentation-example-arize.

Resources:

  • Google Colab: tutorial on instrumenting a LangChain application and sending traces to Arize
  • tutorials/python/llm/tracing/langchain at main · Arize-ai/tutorials (GitHub): more LangChain tutorials
  • openinference/js/examples/langchain-express at main · Arize-ai/openinference (GitHub)
  • Native thread tracking example: https://github.com/cephalization/langchain-instrumentation-example-arize