How to use the Python LlamaIndexInstrumentor to trace LlamaIndex
LlamaIndex is a data framework for your LLM application. With it, you can build applications that leverage RAG (retrieval-augmented generation) to supercharge an LLM with your own data. RAG is a powerful LLM application pattern because it lets you harness the capabilities of LLMs such as OpenAI's GPT models while grounding their responses in your own data and use case.
For LlamaIndex, tracing instrumentation is added via an OpenTelemetry instrumentor aptly named LlamaIndexInstrumentor. The instrumentor creates spans for your LlamaIndex operations and sends them to the Phoenix collector.
We recommend using llama_index >= 0.11.0.
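If you want to enforce that minimum when installing, a pip version specifier does it (assuming pip as your package manager):

```shell
pip install 'llama-index>=0.11.0'
```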
Launch Phoenix
Phoenix supports LlamaIndex's latest instrumentation paradigm. This paradigm requires LlamaIndex >= 0.10.43. For legacy support, see below.
Phoenix Developer Edition is another name for LlamaTrace
Install packages:
```shell
pip install arize-phoenix-otel
```
Connect your application to your cloud instance:
```python
import os

from phoenix.otel import register

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"

# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="https://app.phoenix.arize.com/v1/traces",
)
```
Your Phoenix API key can be found on the Keys section of your dashboard.
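If you'd rather keep the key out of your code, the same header can be supplied through the environment before your application starts (a sketch; replace the placeholder with your actual key):

```shell
export PHOENIX_CLIENT_HEADERS="api_key=<your-phoenix-api-key>"
```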
Launch your local Phoenix instance:
```shell
pip install arize-phoenix
phoenix serve
```
For details on customizing a local terminal deployment, see Terminal Setup.
Install packages:
```shell
pip install arize-phoenix-otel
```
Connect your application to your instance using:
```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="http://localhost:6006/v1/traces",
)
```
For more info on using Phoenix with Docker, see Docker
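As a minimal sketch of a containerized deployment (verify the image name and tag against the Docker docs; arizephoenix/phoenix is an assumption here):

```shell
docker run -p 6006:6006 arizephoenix/phoenix:latest
```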
Install packages:
```shell
pip install arize-phoenix
```
Launch Phoenix:
```python
import phoenix as px

px.launch_app()
```
Connect your notebook to Phoenix:
```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
)
```
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See Persistence or use one of the other deployment options to retain traces.
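One possible workaround, assuming your Phoenix version honors the PHOENIX_WORKING_DIR environment variable described in the Persistence docs, is to point Phoenix at a durable directory before launching:

```python
import os

# Assumption: PHOENIX_WORKING_DIR controls where Phoenix stores its data;
# check the Persistence docs for your Phoenix version before relying on this.
os.environ["PHOENIX_WORKING_DIR"] = "/path/to/phoenix-data"

import phoenix as px

px.launch_app()
```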
Initialize the LlamaIndexInstrumentor before your application code.
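The instrumentor ships in the openinference-instrumentation-llama-index package (the same package referenced in the legacy notes below); install it first if it isn't already in your environment:

```shell
pip install openinference-instrumentation-llama-index
```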
```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```
Run LlamaIndex
You can now use LlamaIndex as normal, and tracing will be automatically captured and sent to your Phoenix instance.
```python
import os

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```
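To spot-check that spans actually landed, one option (assuming the px.Client API in your Phoenix version and a running instance) is to pull the collected spans into a DataFrame:

```python
import phoenix as px

# Assumption: px.Client() connects to your local Phoenix instance by default
# and exposes get_spans_dataframe(); verify against your Phoenix version.
client = px.Client()
spans_df = client.get_spans_dataframe()
print(spans_df.head())
```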
Legacy (0.10.0 - 0.10.42)
If you are on a LlamaIndex 0.10.x version older than the 0.10.43 required above, you can use the arize_phoenix global handler instead:

```python
# Phoenix can display in real time the traces automatically
# collected from your LlamaIndex application.
import phoenix as px

# Look for a URL in the output to open the App in a browser.
px.launch_app()

# The App is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.
from llama_index.core import set_global_handler

set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.
```
Legacy (<0.10.0)
If you are using an older version of LlamaIndex (pre-0.10), you can still use Phoenix. You will need arize-phoenix>3.0.0 and must downgrade to openinference-instrumentation-llama-index<1.0.0.
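Assuming pip, those pins look like:

```shell
pip install 'arize-phoenix>3.0.0' 'openinference-instrumentation-llama-index<1.0.0'
```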
```python
# Phoenix can display in real time the traces automatically
# collected from your LlamaIndex application.
import phoenix as px

# Look for a URL in the output to open the App in a browser.
px.launch_app()

# The App is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.
import llama_index

llama_index.set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.
```