AutoGen is an agent framework from Microsoft for building complex agents. It is unique in its ability to create multiple agents and connect them so they can work together to accomplish tasks.
import os

from phoenix.otel import register

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"

# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="https://app.phoenix.arize.com/v1/traces",
)
Your Phoenix API key can be found on the Keys section of your dashboard.
Launch your local Phoenix instance:
pip install arize-phoenix
phoenix serve
For details on customizing a local terminal deployment, see Terminal Setup.
Install packages:
pip install arize-phoenix-otel
Connect your application to your instance using:
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="http://localhost:6006/v1/traces",
)
For more info on using Phoenix with Docker, see Docker
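As a minimal sketch, a local Docker launch might look like the following (the image name and port mapping are assumptions based on Phoenix's published Docker image; check the Docker guide for the current options):

docker run -p 6006:6006 arizephoenix/phoenix:latest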
Install packages:
pip install arize-phoenix
Launch Phoenix:
import phoenix as px

px.launch_app()
Connect your notebook to Phoenix:
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
)
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See Persistence or use one of the other deployment options to retain traces.
Install
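The OpenAIInstrumentor used in the example below ships in the OpenInference OpenAI instrumentation package (the package name here is inferred from the import in that example):

pip install openinference-instrumentation-openai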
Phoenix instruments AutoGen by instrumenting the underlying model library it uses. If your agents are set up to call OpenAI, use our OpenAI instrumentor, as in the example below.
If your agents use a different model provider, be sure to instrument that provider's library instead.
from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
Run Autogen
From here you can use Autogen as normal, and Phoenix will automatically trace any model calls made.
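As a minimal sketch, a two-agent conversation with the classic pyautogen API might look like the following; the agent names, model, and task message are illustrative assumptions, and the only Phoenix-specific step is the instrumentation above:

import os

from autogen import AssistantAgent, UserProxyAgent

# LLM configuration for the agents; the model name is an illustrative assumption
llm_config = {
    "config_list": [
        {"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]},
    ]
}

# An assistant agent backed by OpenAI and a user proxy that relays the task
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",       # run unattended
    code_execution_config=False,    # skip local code execution for this example
    max_consecutive_auto_reply=1,   # keep the exchange short
)

# Every underlying OpenAI call made during this exchange is traced by Phoenix
user_proxy.initiate_chat(
    assistant,
    message="Summarize the benefits of multi-agent systems in two sentences.",
)

The model calls made during initiate_chat appear as spans under the project you configured with register.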
Observe
The Phoenix support is simple in its first incarnation, but it captures all of the prompts and responses that flow between agents under the framework.
The individual prompts and responses are captured directly through the OpenAI calls. If you're using a different underlying model provider than OpenAI, instrument your application with that provider's instrumentor instead.
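For example, if your agents call Anthropic models instead, you would swap in the corresponding OpenInference instrumentor (the package and class names below follow the OpenInference naming convention and are assumptions, not taken from this guide):

pip install openinference-instrumentation-anthropic

from openinference.instrumentation.anthropic import AnthropicInstrumentor

# Trace Anthropic calls instead of OpenAI calls
AnthropicInstrumentor().instrument(tracer_provider=tracer_provider)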