LlamaIndex is a data framework for your LLM application. It's a powerful framework with which you can build an application that leverages RAG (retrieval-augmented generation) to super-charge an LLM with your own data. RAG is an extremely powerful LLM application model because it lets you harness the power of LLMs such as OpenAI's GPT while tuning them to your own data and use case.
For LlamaIndex, tracing instrumentation is added via an OpenTelemetry instrumentor aptly named the LlamaIndexInstrumentor. This instrumentor is what creates spans and sends them to the Phoenix collector.
We recommend using llama_index >= 0.11.0
Sign up for Phoenix:
Phoenix Developer Edition is another name for LlamaTrace
Install packages:
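A typical set of packages for this integration (a sketch; pin versions as needed for your environment):

```bash
pip install arize-phoenix-otel openinference-instrumentation-llama_index "llama-index>=0.11.0"
```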
Connect your application to your cloud instance:
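A minimal sketch, assuming a Phoenix Cloud instance and the standard Phoenix environment variables (replace the placeholder API key with your own):

```python
import os

# Authenticate to Phoenix Cloud and point the OTLP exporter at it.
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=YOUR_PHOENIX_API_KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
```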
Initialize the LlamaIndexInstrumentor before your application code.
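For example, assuming the packages above are installed and the collector endpoint is configured:

```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

# Register an OpenTelemetry tracer provider that exports spans to Phoenix,
# then instrument LlamaIndex so its operations are traced automatically.
tracer_provider = register(project_name="my-llm-app")  # project name is an example
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```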
You can now use LlamaIndex as normal, and tracing will be automatically captured and sent to your Phoenix instance.
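For instance, a simple RAG query like the one below (the ./data directory and the question are placeholders) will appear as a trace:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local documents and run a query; retrieval and LLM spans
# are captured by the instrumentor and sent to Phoenix. The default LlamaIndex
# models are OpenAI's, so OPENAI_API_KEY must be set for this sketch.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say?"))
```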
View your traces in Phoenix:
Phoenix supports LlamaIndex's latest instrumentation paradigm, which requires LlamaIndex >= 0.10.43. For legacy support, see below.
Sign up for an Arize Phoenix account at https://app.phoenix.arize.com or https://llamatrace.com.
Your Phoenix API key can be found in the Keys section of your dashboard.
For details on customizing a local terminal deployment, see the Phoenix self-hosting documentation.
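For reference, a local instance can typically be launched from the terminal like so (a sketch; assumes the arize-phoenix package is installed):

```bash
pip install arize-phoenix
phoenix serve
```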
Pull the latest Phoenix image from Docker Hub:
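A sketch of the Docker commands, using the standard Phoenix image name and ports (adjust for your environment):

```bash
docker pull arizephoenix/phoenix:latest
# Port 6006 serves the UI and HTTP collector; 4317 is the OTLP gRPC collector.
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```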
For more info on using Phoenix with Docker, see the Phoenix Docker documentation.
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See the Phoenix persistence documentation or use one of the other deployment options to retain traces.
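If you do want to run Phoenix from a notebook anyway, it can be launched in-process, for example:

```python
import phoenix as px

# Start a temporary, in-memory Phoenix instance inside the notebook session.
px.launch_app()
```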
How to use the Python LlamaIndexInstrumentor to trace LlamaIndex