LangChain
Arize has first-class support for LangChain applications. After instrumentation, you will have a full trace of every part of your LLM application, including input, embeddings, retrieval, functions, and output messages.
We follow a standardized format for how trace data should be structured using OpenInference, our open-source package built on OpenTelemetry.
Use the code block below to get started with our LangChainInstrumentor.
# Import open-telemetry dependencies
from arize.otel import register
# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",  # in app space settings page
    api_key="your-api-key",  # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)
# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.langchain import LangChainInstrumentor
# Finish automatic instrumentation
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
For a more detailed demonstration, check out our Colab tutorial:
Install the LangChain instrumentation package via npm:
npm install @arizeai/openinference-instrumentation-langchain
The example below utilizes the OpenInference JavaScript LangChain example.
Navigate to the backend folder.
In addition to the package above, sending traces to Arize requires the @opentelemetry/exporter-trace-otlp-grpc package (along with @grpc/grpc-js for the Metadata type used below). Install them by running the following command in your shell:
npm install @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js
instrumentation.ts should be implemented as below (you'll need to install all of the packages imported below in the same manner as above):
/*instrumentation.ts */
import { LangChainInstrumentation } from "@arizeai/openinference-instrumentation-langchain";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { Resource } from "@opentelemetry/resources";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";
import * as CallbackManagerModule from "@langchain/core/callbacks/manager";
// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);
// Arize specific - Create metadata and add your headers
const metadata = new Metadata();
// Your Arize Space and API Keys, which can be found in the UI
metadata.set('space_id', 'your-space-id');
metadata.set('api_key', 'your-api-key');
const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
});
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new GrpcOTLPTraceExporter({
      url: "https://otlp.arize.com/v1",
      metadata,
    }),
  ),
);
const lcInstrumentation = new LangChainInstrumentation();
// LangChain must be manually instrumented as it doesn't have
// a traditional module structure
lcInstrumentation.manuallyInstrument(CallbackManagerModule);
provider.register();
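Once instrumentation.ts has been loaded, any LangChain call in the same process is traced automatically. The snippet below is a minimal sketch (not part of the original example) of what an instrumented entry point might look like; the chat.ts file name, the relative import path, and the OPENAI_API_KEY environment variable are assumptions to adapt to your setup.
/* chat.ts - a minimal sketch, not part of the original example */
import "./instrumentation"; // load and register the tracer provider first
import { ChatOpenAI } from "@langchain/openai";
const chatModel = new ChatOpenAI({ modelName: "gpt-3.5-turbo" }); // assumes OPENAI_API_KEY is set
async function main() {
  // This invocation is traced by the LangChainInstrumentation registered in
  // instrumentation.ts and exported to the console and to Arize.
  const response = await chatModel.invoke("Hello, what can you do?");
  console.log(response.content);
}
main().catch(console.error);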
If you simultaneously want to send spans to a Phoenix collector, you should also add the following code block from the original instrumentation.ts file.
import {
  OTLPTraceExporter as ProtoOTLPTraceExporter,
} from "@opentelemetry/exporter-trace-otlp-proto";
// add as another SpanProcessor below the previous SpanProcessor
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new ProtoOTLPTraceExporter({
      // This is the url where your phoenix server is running
      url: "http://localhost:6006/v1/traces",
    }),
  ),
);
Follow the steps from the backend and frontend READMEs, or simply run:
docker compose up --build
to build and run the frontend, backend, and Phoenix all at the same time. Navigate to localhost:3000 to begin sending messages to the chatbot, and check out your traces in Arize at app.arize.com or in Phoenix at localhost:6006.
Native Thread Tracking
Arize supports native thread tracking with LangChain by enabling the use of session_id, thread_id, or conversation_id to group related calls. This makes it easy to track multi-turn conversations and to monitor, analyze, and debug chatbot interactions with Arize's observability tools. Below is an example demonstrating how to set a thread_id in the metadata for a chat:
import { ChatOpenAI } from "@langchain/openai";
const chatModel = new ChatOpenAI({
  openAIApiKey: "my-api-key",
  modelName: "gpt-3.5-turbo",
});
async function run() {
  // First message invocation
  const response1 = await chatModel.invoke("Hello, how are you?", {
    metadata: {
      thread_id: "thread-456",
    },
  });
  // Second message invocation
  const response2 = await chatModel.invoke("What can you do?", {
    metadata: {
      thread_id: "thread-456",
    },
  });
  // The thread_id metadata is attached to the exported spans, so both
  // invocations are grouped under the same thread in Arize.
  console.log("Response 1:", response1.content);
  console.log("Response 2:", response2.content);
}
run().catch(console.error);
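If you'd rather not repeat the metadata on every call, LangChain's Runnable API also lets you bind it once with withConfig. The sketch below is a variant of the example above, not part of the original repository; it reuses the same chatModel and belongs inside run().
  // Bind the thread_id once so every subsequent invocation carries it
  const threadedModel = chatModel.withConfig({
    metadata: { thread_id: "thread-456" },
  });
  const response3 = await threadedModel.invoke("Tell me a joke.");
  console.log("Response 3:", response3.content);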
For an executable example of native thread tracking, please clone the following repository into your local directory: https://github.com/cephalization/langchain-instrumentation-example-arize.
Once the repository is cloned, run the following command in your terminal to install the necessary dependencies:
npm install
After the dependencies in package.json are installed, you can execute the example by running:
node index.js
This will run the index.js file from the example repository in your local directory.