Groq provides lightning-fast, low-latency inference for AI models. Arize supports instrumenting Groq API calls, including role types such as system, user, and assistant messages, as well as tool use. You can create a free GroqCloud account and generate a Groq API Key here to get started.
In this example, we will instrument an LLM application built using Groq.
Set up GroqInstrumentor to trace calls to the Groq LLM in the application and send the traces to an Arize model endpoint as defined below.
from openinference.instrumentation.groq import GroqInstrumentor

# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",            # in app space settings page
    api_key="your-api-key",              # in app space settings page
    project_name="your-project-name",    # name this to whatever you would like
)

# Instrument all Groq client calls so they are traced and exported to Arize
GroqInstrumentor().instrument(tracer_provider=tracer_provider)
Run a simple chat completion via Groq and see it instrumented.
import os

from groq import Groq

# get your groq api key by visiting https://groq.com/
os.environ["GROQ_API_KEY"] = "your-groq-api-key"

client = Groq()

# send a request to the groq client
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of low latency LLMs",
        }
    ],
    model="mixtral-8x7b-32768",
)
print(chat_completion.choices[0].message.content)
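As noted above, the instrumentor also captures system and assistant messages as well as tool use on the resulting spans. The sketch below shows a request that exercises those features; the get_weather tool definition and the llama-3.3-70b-versatile model name are illustrative assumptions, not part of the example above, so swap in your own tool schema and model. It assumes the GroqInstrumentor setup shown earlier has already run.

# A minimal sketch: a chat completion with a system message and a tool definition.
# The tool schema and model name here are hypothetical placeholders.
tool_call_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What is the weather in San Francisco?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    tool_choice="auto",
    model="llama-3.3-70b-versatile",
)

# Any tool calls the model makes are recorded on the span alongside the messages
print(tool_call_completion.choices[0].message.tool_calls)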