If you are connecting to Phoenix Cloud, set your Phoenix API key and collector endpoint:

import os

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

Your Phoenix API key can be found in the Keys section of your dashboard.
Alternatively, launch a local Phoenix instance from your terminal:
pip install arize-phoenix
phoenix serve
For details on customizing a local terminal deployment, see Terminal Setup.
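As a minimal sketch, the local server can be customized through Phoenix's environment variables; for example, PHOENIX_HOST and PHOENIX_PORT control where it listens (the values below are illustrative):

# Serve Phoenix on a non-default host/port
export PHOENIX_HOST=0.0.0.0
export PHOENIX_PORT=6007
phoenix serve

If you change the port, point PHOENIX_COLLECTOR_ENDPOINT at the new address (e.g. http://localhost:6007) in the step below.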
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint:
import os
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
Alternatively, run Phoenix with Docker. Pull and run the latest Phoenix image:

docker run -p 6006:6006 arizephoenix/phoenix:latest

This will expose Phoenix on localhost:6006.
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint:
import os
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
For more info on using Phoenix with Docker, see Docker.
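As a sketch, you can keep traces across container restarts by mounting a volume and pointing Phoenix's working directory at it via the PHOENIX_WORKING_DIR environment variable (the volume name phoenix_data and the mount path below are illustrative):

# Persist Phoenix data in a named Docker volume
docker run -p 6006:6006 \
  -e PHOENIX_WORKING_DIR=/mnt/data \
  -v phoenix_data:/mnt/data \
  arizephoenix/phoenix:latest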
To run Phoenix directly inside a notebook instead, install packages:
pip install arize-phoenix
Launch Phoenix:
import phoenix as px
px.launch_app()
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See self-hosting or use one of the other deployment options to retain traces.
Set the MISTRAL_API_KEY environment variable to authenticate calls made using the SDK.
export MISTRAL_API_KEY=[your_key_here]
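If you are working in a notebook rather than a shell, a minimal sketch using the standard library's getpass avoids hard-coding the key:

import os
from getpass import getpass

# Prompt for the key if it is not already set in the environment
if not os.environ.get("MISTRAL_API_KEY"):
    os.environ["MISTRAL_API_KEY"] = getpass("Mistral API key: ")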
Connect to your Phoenix instance using the register function.
from phoenix.otel import register
# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OpenInference dependencies
)
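For auto_instrument to capture Mistral calls, the OpenInference instrumentor for Mistral must be installed alongside the SDK (assuming you haven't installed it already):

pip install openinference-instrumentation-mistralai mistralai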
Run Mistral
import os
from mistralai import Mistral
from mistralai.models import UserMessage
api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-tiny"
client = Mistral(api_key=api_key)
chat_response = client.chat.complete(
    model=model,
    messages=[UserMessage(content="What is the best French cheese?")],
)
print(chat_response.choices[0].message.content)
Observe
Now that tracing is set up, all invocations of Mistral (completions, chat completions, embeddings) will be streamed to your running Phoenix instance for observability and evaluation.
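For example, an embeddings call goes through the same instrumentation; a minimal sketch using Mistral's mistral-embed model:

# Embeddings are traced just like chat completions
embeddings_response = client.embeddings.create(
    model="mistral-embed",
    inputs=["Embed this sentence.", "And this one."],
)
print(len(embeddings_response.data))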