You can programmatically create an evaluation task using the createEvalTask mutation. This lets you set up evaluations based on your specific criteria for LLM applications. The mutation accepts the following parameters:
modelId: The unique identifier for your project.
samplingRate: The percentage of inputs the task will sample for evaluation.
queryFilter: Conditions that filter which inputs are included in the evaluation.
name: The name of the evaluation task.
templateEvaluators: Settings for how the evaluation is performed (e.g., evaluator name, template, rails, explanations).
llmConfig: The LLM configuration, such as model provider and temperature settings.
To update an existing evaluation task, use the patchEvalTask mutation. Include only the parameters you want to change; any parameters you omit are left unchanged.
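Because omitted parameters are left unchanged, a patch can be as small as a single field. The sketch below updates only the sampling rate; the task identifier field name and the return selection are assumptions.

```graphql
mutation {
  patchEvalTask(
    input: {
      evalTaskId: "your-task-id"  # assumed name for the task identifier field
      samplingRate: 25            # the only field being changed
    }
  ) {
    __typename  # placeholder selection; select real fields per the schema
  }
}
```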
To programmatically run a task on historical data, use the runOnlineTask mutation. This allows you to run your task over a specific time range. The mutation accepts the following parameters:
onlineTaskId: The unique identifier for your task.
dataStartTime: A date-time string in UTC (such as 2007-12-03T10:15:30Z) for the start of your historical data.
dataEndTime: A date-time string in UTC for the end of your historical data.
maxSpans: The maximum number of spans to evaluate. Defaults to 10,000 if not specified.
The response to this mutation is a union type, meaning it can return one of two types:
TaskError: Indicates a failure to start the task run. It includes an error message explaining the reason for the failure.
CreateTaskRunResponse: Indicates the task run started successfully. It contains the run ID of the newly created task run.
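In GraphQL, a union result is handled with inline fragments, one per member type. The sketch below shows that pattern for this mutation; the variable type name and the fields inside each fragment (message, runId) are assumed names for the error message and run ID described above.

```graphql
mutation RunTaskOnHistoricalData($input: RunOnlineTaskInput!) {
  runOnlineTask(input: $input) {
    __typename
    ... on TaskError {
      message  # assumed field name for the failure explanation
    }
    ... on CreateTaskRunResponse {
      runId    # assumed field name for the new run's ID
    }
  }
}
```

Checking __typename, or matching on whichever inline fragment returned data, lets your client distinguish a failed start from a successful one.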