smolagents is a minimalist AI agent framework developed by Hugging Face, designed to simplify the creation and deployment of powerful agents with just a few lines of code. It focuses on simplicity and efficiency, making it easy for developers to leverage large language models (LLMs) for various applications.
Phoenix provides auto-instrumentation, allowing you to track and visualize every step and call made by your agent.
Launch Phoenix
The code samples below show different ways to integrate Phoenix with smolagents, depending on how you deploy Phoenix.
```python
import os
from phoenix.otel import register

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
)
```
Your Phoenix API key can be found on the Keys section of your dashboard.
Launch your local Phoenix instance:
```shell
pip install arize-phoenix
phoenix serve
```
For details on customizing a local terminal deployment, see Terminal Setup.
Install packages:
```shell
pip install arize-phoenix-otel
```
Connect your application to your instance using:
```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="http://localhost:6006/v1/traces",
)
```
For more info on using Phoenix with Docker, see Docker.
Install packages:
```shell
pip install arize-phoenix
```
Launch Phoenix:
```python
import phoenix as px

px.launch_app()
```
Connect your notebook to Phoenix:
```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
)
```
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See Persistence or use one of the other deployment options to retain traces.
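If you do want traces from a notebook-launched instance to survive restarts, one option is to point Phoenix at a durable working directory before launching it. The sketch below assumes the `PHOENIX_WORKING_DIR` environment variable (which Phoenix reads for its storage location); the path is only an example:

```python
import os

# Point Phoenix at a durable directory so traces survive notebook restarts.
# The path here is an example; choose any writable location.
os.environ["PHOENIX_WORKING_DIR"] = os.path.expanduser("~/phoenix-data")

# Launch Phoenix after setting the variable:
# import phoenix as px
# px.launch_app()
```

Set the variable before `px.launch_app()` runs, since Phoenix resolves its storage location at startup.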
Initialize the SmolagentsInstrumentor before your application code:
```python
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

SmolagentsInstrumentor().instrument(tracer_provider=tracer_provider)
```
Create & Run an Agent
Create your Hugging Face model; traces from every run will be sent to Phoenix.
```python
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    ManagedAgent,
    DuckDuckGoSearchTool,
    VisitWebpageTool,
    HfApiModel,
)

model = HfApiModel()

agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=model,
)
managed_agent = ManagedAgent(
    agent=agent,
    name="managed_agent",
    description="This is an agent that can do web search.",
)
manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[managed_agent],
)
manager_agent.run(
    "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)
```
Observe
Now that tracing is set up, all invocations and steps of your agent will be streamed to your running Phoenix instance for observability and evaluation.