DSPy is a framework for automatically prompting and fine-tuning language models. It provides composable and declarative APIs that allow developers to describe the architecture of their LLM application in the form of a "module" (inspired by PyTorch's nn.Module). It then compiles these modules using "teleprompters" that optimize the module for a particular task. The term "teleprompter" is meant to evoke "prompting at a distance," and could involve selecting few-shot examples, generating prompts, or fine-tuning language models.
Phoenix makes your DSPy applications observable by visualizing the underlying structure of each call to your compiled DSPy module.
Sign up for Phoenix:
Sign up for an Arize Phoenix account at app.phoenix.arize.com.
Install packages:
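A minimal install, assuming you are using pip and the OpenInference instrumentors for DSPy and LiteLLM:

```shell
pip install arize-phoenix-otel openinference-instrumentation-dspy openinference-instrumentation-litellm dspy
```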
Connect your application to your cloud instance:
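For example, by pointing OpenTelemetry exports at Phoenix Cloud via environment variables (the key value below is a placeholder):

```python
import os

# Send traces to your Phoenix Cloud instance.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_API_KEY"] = "your-api-key"  # placeholder; see the note below
```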
Your Phoenix API key can be found on the Keys section of your dashboard.
Initialize the DSPyInstrumentor before your application code.
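A sketch of the setup, using phoenix.otel.register and the OpenInference DSPyInstrumentor (the project name here is an arbitrary example):

```python
from phoenix.otel import register
from openinference.instrumentation.dspy import DSPyInstrumentor

# Configure an OpenTelemetry tracer provider that exports to Phoenix.
tracer_provider = register(project_name="dspy-demo")  # project name is illustrative

# Instrument DSPy before any modules are defined or invoked.
DSPyInstrumentor().instrument(tracer_provider=tracer_provider)
```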
DSPy uses LiteLLM under the hood to handle LLM calls. By also instrumenting LiteLLM, you'll be able to see token counts on your DSPy spans and traces.
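Assuming the openinference-instrumentation-litellm package is installed, instrumenting LiteLLM alongside DSPy looks like:

```python
from openinference.instrumentation.litellm import LiteLLMInstrumentor

# Reuse the tracer provider from the previous step so both
# instrumentors export to the same Phoenix project.
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```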
Now invoke your compiled DSPy module. Your traces should appear inside of Phoenix.
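For example, a minimal module using dspy.Predict (the model name is an arbitrary choice, and an OpenAI key is assumed to be set):

```python
import dspy

# Configure the language model; assumes OPENAI_API_KEY is set.
lm = dspy.LM("openai/gpt-4o-mini")  # model choice is illustrative
dspy.configure(lm=lm)

# A simple signature-based module; each call produces a trace in Phoenix.
predict = dspy.Predict("question -> answer")
result = predict(question="What is the capital of France?")
print(result.answer)
```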
Now that you have tracing set up, all predictions will be streamed to your running Phoenix instance for observability and evaluation.
Pull the latest Phoenix image from Docker Hub:
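For example (the tag and port mappings follow the standard Phoenix image; adjust as needed):

```shell
docker pull arizephoenix/phoenix:latest
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```

Port 6006 serves the Phoenix UI and HTTP collector, and port 4317 accepts OTLP over gRPC.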
For more info on using Phoenix with Docker, see the Phoenix Docker documentation.
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See the documentation on persistence or use one of the other deployment options to retain traces.