LLMRunMetadata
Ingest metadata about your LLM inferences
total_token_count
int
The total number of tokens used in the inference, i.e. the sum of the tokens in the prompt sent to the LLM and in its response
prompt_token_count
int
The number of tokens used in the prompt sent to the LLM
response_token_count
int
The number of tokens used in the response returned by the LLM
response_latency_ms
int or float
The latency (in ms) experienced during the LLM run
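The fields above can be sketched as a plain Python dataclass. This is an illustrative stand-in mirroring the schema, not the SDK's own class definition; field names and types follow the table, and the invariant that the total equals prompt plus response tokens is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class LLMRunMetadata:
    # Token counts are ints; latency is milliseconds, int or float.
    total_token_count: int
    prompt_token_count: int
    response_token_count: int
    response_latency_ms: float

meta = LLMRunMetadata(
    total_token_count=4325,
    prompt_token_count=2325,
    response_token_count=2000,
    response_latency_ms=20000,
)

# Sanity check: total should equal prompt + response token counts.
assert meta.total_token_count == meta.prompt_token_count + meta.response_token_count
```

In the real SDK this metadata object is passed alongside the inference record when logging, so keeping the token counts consistent up front avoids confusing cost and usage dashboards later.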