Run the commands below to install our open-source tracing packages, which work on top of OpenTelemetry. This example uses OpenAI, and we support many other LLM providers (see the full list).
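For the OpenAI example in this guide, an install along these lines should be enough (package names are inferred from the imports used below; other providers use their own instrumentation packages):

pip install arize-otel openinference-instrumentation-openai openai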
Go to your space settings in the left navigation, and you will see your API keys on the right-hand side. You'll need the Space ID and API key for the next part.
Are you coding with JavaScript instead of Python? See our detailed guide on auto-instrumentation or manual instrumentation with JavaScript examples.
The following code snippet showcases how to automatically instrument your OpenAI application.
# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",          # in app space settings page
    api_key="your-api-key",            # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Finish automatic instrumentation
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
Set your OpenAI API key:
import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key")
Now start asking questions to your LLM app and watch the traces being collected by Arize. You can also follow our Colab guide here.
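For example, once the instrumentor above is active, any standard OpenAI call produces a trace. A minimal sketch (the model name and prompt here are placeholders, not part of the original guide):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is traced automatically by the OpenAIInstrumentor registered above
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute whichever model you use
    messages=[{"role": "user", "content": "What can Arize tracing show me?"}],
)
print(response.choices[0].message.content)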
The following code snippet implements instrumentation for an OpenAI client in TypeScript.
/* instrumentation.ts */
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  BatchSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { Resource } from "@opentelemetry/resources";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Arize specific - Create metadata and add your headers
const metadata = new Metadata();
// Your Arize Space and API Keys, which can be found in the UI
metadata.set('space_id', 'your-space-id');
metadata.set('api_key', 'your-api-key');

const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
});

provider.addSpanProcessor(new BatchSpanProcessor(new ConsoleSpanExporter()));
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new GrpcOTLPTraceExporter({
      url: "https://otlp.arize.com/v1",
      metadata,
    }),
  ),
);

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
});

provider.register();
You can also follow our example application in the OpenInference GitHub repository.
The following code snippet showcases how to automatically instrument your LlamaIndex application.
import os

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",          # in app space settings page
    api_key="your-api-key",            # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Finish automatic instrumentation
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
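If you haven't installed the LlamaIndex instrumentor yet, something like the following should cover this example (package names inferred from the imports; gcsfs is only needed for the GCS-hosted example index below):

pip install arize-otel openinference-instrumentation-llama-index llama-index gcsfs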
To test, you can create a simple RAG application using LlamaIndex.
from gcsfs import GCSFileSystem
from llama_index.core import (
    Settings,
    StorageContext,
    load_index_from_storage,
)
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

file_system = GCSFileSystem(project="public-assets-275721")
index_path = "arize-phoenix-assets/datasets/unstructured/llm/llama-index/arize-docs/index/"
storage_context = StorageContext.from_defaults(
    fs=file_system,
    persist_dir=index_path,
)

Settings.llm = OpenAI(model="gpt-4-turbo-preview")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")

index = load_index_from_storage(
    storage_context,
)
query_engine = index.as_query_engine()
response = query_engine.query("What is Arize and how can it help me as an AI Engineer?")
Now start asking questions to your LLM app and watch the traces being collected by Arize. 🦙
The following code snippet showcases how to automatically instrument your LangChain application.
import os

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",          # in app space settings page
    api_key="your-api-key",            # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.langchain import LangChainInstrumentor

# Finish automatic instrumentation
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
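As with the other integrations, the instrumentor has to be installed first; for this example the following set of packages should do (names inferred from the imports used here):

pip install arize-otel openinference-instrumentation-langchain langchain langchain-openai numpy pandas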
To test, you can create a simple RAG application using LangChain.
import numpy as np

from langchain.chains import RetrievalQA
from langchain.retrievers import KNNRetriever
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# `df` is assumed to be a pandas DataFrame with "text" and "text_vector" columns
knn_retriever = KNNRetriever(
    index=np.stack(df["text_vector"]),
    texts=df["text"].tolist(),
    embeddings=OpenAIEmbeddings(),
)

chain_type = "stuff"  # stuff, refine, map_reduce, and map_rerank
chat_model_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=chat_model_name)
chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type=chain_type,
    retriever=knn_retriever,
    metadata={"application_type": "question_answering"},
)
response = chain.invoke("What is Arize and how can it help me as an AI Engineer?")
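The snippet above assumes a pandas DataFrame named df with "text" and "text_vector" columns, which the guide does not define. One possible way to build a tiny stand-in for testing (the sample documents are placeholders, and OPENAI_API_KEY must be set in the environment):

import pandas as pd
from langchain_openai import OpenAIEmbeddings

# A couple of placeholder documents just to exercise the retriever
docs = [
    "Arize is an observability platform for ML and LLM applications.",
    "Tracing captures each step of an LLM app so you can debug and evaluate it.",
]
df = pd.DataFrame({
    "text": docs,
    "text_vector": OpenAIEmbeddings().embed_documents(docs),
})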
Now start asking questions to your LLM app and watch the traces being collected by Arize.
In this example, we will instrument an LLM application built using Groq.
Set up the GroqInstrumentor to trace calls to the Groq LLM in your application and send the traces to the Arize endpoint defined below.
from openinference.instrumentation.groq import GroqInstrumentor

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",          # in app space settings page
    api_key="your-api-key",            # in app space settings page
    project_name="your-project-name",  # name this to whatever you would like
)

GroqInstrumentor().instrument(tracer_provider=tracer_provider)
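The Groq client and its instrumentor can be installed with something like (package names inferred from the imports above):

pip install arize-otel openinference-instrumentation-groq groq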
Run a simple chat completion via Groq and see it instrumented:
import os
from groq import Groq

# get your groq api key by visiting https://groq.com/
os.environ["GROQ_API_KEY"] = "your-groq-api-key"

client = Groq()

# send a request to the groq client
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of low latency LLMs",
        }
    ],
    model="mixtral-8x7b-32768",
)
print(chat_completion.choices[0].message.content)
Run your LLM application
Once you've executed a sufficient number of queries (or chats) to your application, you can view the details on the LLM Tracing page.
To continue with this guide, go to Quickstart: Evaluation to add evaluation labels to your traces!
Next steps
Dive deeper into the following topics to keep improving your LLM application!