LiteLLM
LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server to call 100+ LLMs in the OpenAI format. Both are supported by this auto-instrumentation.
Any calls made to the following functions will be automatically captured by this integration:
completion()
acompletion()
completion_with_retries()
embedding()
aembedding()
image_generation()
aimage_generation()
Sign up for Phoenix:
Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login
Install packages:
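A typical install, assuming the `openinference-instrumentation-litellm` instrumentor and the `arize-phoenix-otel` helper package (verify package names against the latest Phoenix docs):

```bash
pip install arize-phoenix-otel openinference-instrumentation-litellm litellm
```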
Set your Phoenix endpoint and API Key:
Your Phoenix API key can be found in the Keys section of your Phoenix dashboard.
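A minimal sketch for a Phoenix Cloud setup, assuming a recent Phoenix version that reads the `PHOENIX_COLLECTOR_ENDPOINT` and `PHOENIX_API_KEY` environment variables:

```python
import os

# Point the OTLP exporter at your Phoenix instance and authenticate.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_API_KEY"] = "your-phoenix-api-key"  # from the Keys page
```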
Use the register function to connect your application to Phoenix:
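For example, using `phoenix.otel.register` with auto-instrumentation enabled (the `project_name` below is a placeholder):

```python
from phoenix.otel import register

# Creates a tracer provider wired to the Phoenix endpoint configured above.
# auto_instrument=True activates any installed OpenInference instrumentors,
# including the LiteLLM one.
tracer_provider = register(
    project_name="my-litellm-app",  # hypothetical project name
    auto_instrument=True,
)
```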
Add any API keys needed by the models you are using with LiteLLM.
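For example, if you are calling OpenAI models through LiteLLM (the variable name depends on your provider):

```python
import os

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
```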
You can now use LiteLLM as normal and calls will be traced in Phoenix.
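A sketch of an instrumented call, assuming an OpenAI model (any LiteLLM-supported model string works):

```python
import litellm

# This call is captured automatically by the instrumentation above.
response = litellm.completion(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "What's the capital of China?"}],
)
print(response.choices[0].message.content)
```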
Traces should now be visible in Phoenix!
Pull the latest Phoenix image from Docker Hub:
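For example, using the standard Phoenix image (port 6006 serves the UI, 4317 receives OTLP gRPC traffic):

```bash
docker pull arizephoenix/phoenix:latest
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```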
For more info on using Phoenix with Docker, see the Phoenix Docker deployment documentation.
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See the persistence documentation, or use one of the other deployment options to retain traces.
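For quick experiments, you can launch a temporary in-notebook instance, assuming the `arize-phoenix` package is installed:

```python
import phoenix as px

# Starts a local Phoenix app; traces sent to it are lost when the session ends.
px.launch_app()
```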