Tracing
Tracing the execution of LLM applications in Arize
Tracing is a powerful tool for understanding how your LLM application works. Arize is not tied to any LLM vendor or framework: you can trace applications built on any large language model or framework.
To get started with code, check out the Quickstart guide for LLM tracing and evaluation.
Learn more about tracing concepts by reading our articles on What are Traces? and How does Tracing Work?
Tracing can help you track down issues like:
Application Latency - highlight slow invocations of LLMs, Retrievers, and other components.
Token Usage - display the breakdown of token usage across LLM calls to surface your most expensive ones.
Runtime Exceptions - critical runtime exceptions, such as rate limiting, are captured as exception events.
Retrieved Documents - view all documents retrieved during a retriever call, along with their scores and the order in which they were returned.
Embeddings - view the embedding text used for retrieval and the underlying embedding model.
LLM Parameters - view the parameters used when calling an LLM to debug things like temperature and the system prompt.
Prompt Templates - see which prompt template was used during the prompting step and which variables were filled in.
Tool Descriptions - view the descriptions and function signatures of the tools your LLM has access to.
LLM Function Calls - for OpenAI or other models with function calling, view the function selection and function messages in the input messages to the LLM.