Online Tasks API

Creating an Evaluation Task

You can programmatically create an evaluation task using the createEvalTask mutation. This lets you set up evaluations of your LLM applications based on your own criteria.

mutation CreateOnlineEvalTask {
  createEvalTask(input: {
    modelId: "MODEL_ID"            # Your model's ID
    samplingRate: 100              # Desired sampling rate for task
    queryFilter: "attributes.openinference.span.kind = 'AGENT'"  # Filter to define the task scope
    name: "test"                   # Name of the evaluation task
    templateEvaluators: {
      name: "test evaluator",       # Column name for the evaluator
      rails: ["incorrect","correct"], # Output labels for evaluation
      template: "template",         # Template for the evaluation
      position: 1,                    # Position of the evaluator column
      includeExplanations: true,      # Include explanations in the results
      useFunctionCallingIfAvailable: false  # Use function calling when the model supports it
    }
    runContinuously: true             # Run the task continuously on incoming data
    llmConfig: {
      modelName: GPT_4o,            # LLM Model name
      provider: openAI,             # LLM provider
      temperature: 0                # Temperature for generation
    }
  }) {
    evalTask {
      name
      samplingRate 
      queryFilter
    }
  }
}

Key Parameters:

  • modelId: The unique identifier for your project.

  • samplingRate: Set the percentage of inputs the task will sample for evaluation.

  • queryFilter: Define conditions to filter which inputs are included in the evaluation.

  • name: The name of the evaluation task.

  • templateEvaluators: Settings for how the evaluation will be performed (e.g., evaluator name, template, rails, explanations).

  • llmConfig: Specify the LLM configuration, such as model provider and temperature settings.
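
Any of these mutations can be submitted to the GraphQL endpoint with a standard HTTP client. The sketch below sends the CreateOnlineEvalTask mutation with Python's requests library; the endpoint URL and the x-api-key header name are assumptions, so substitute the values from your own developer settings.

import requests

# Assumed endpoint and auth header; replace with your own developer settings.
GRAPHQL_ENDPOINT = "https://app.arize.com/graphql"
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}

# The CreateOnlineEvalTask mutation from above, embedded as a query string.
CREATE_EVAL_TASK = """
mutation CreateOnlineEvalTask {
  createEvalTask(input: {
    modelId: "MODEL_ID"
    samplingRate: 100
    queryFilter: "attributes.openinference.span.kind = 'AGENT'"
    name: "test"
    templateEvaluators: {
      name: "test evaluator",
      rails: ["incorrect", "correct"],
      template: "template",
      position: 1,
      includeExplanations: true,
      useFunctionCallingIfAvailable: false
    }
    runContinuously: true
    llmConfig: { modelName: GPT_4o, provider: openAI, temperature: 0 }
  }) {
    evalTask { name samplingRate queryFilter }
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": CREATE_EVAL_TASK}, headers=HEADERS)
response.raise_for_status()
print(response.json())  # Contains the created task's name, samplingRate, and queryFilter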


Updating an Evaluation Task

To update an existing evaluation task, use the patchEvalTask mutation. Include only the parameters you want to change; any parameters you omit remain unchanged.

mutation UpdateOnlineEvalTask {
  patchEvalTask(input: {
    onlineTaskId: "TASK_ID"         # The unique identifier for your task
    samplingRate: 100               # Updated sampling rate
    queryFilter: "attributes.openinference.span.kind = 'LLM'"  # Updated filter
    name: "test"                    # Updated task name
  }) {
    evalTask {
      name
      samplingRate 
      filters {
        dimension {
          id
          name
        }
        operator
      }
      queryFilter
    }
  }
}
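
As a usage sketch, the request below updates only the sampling rate and leaves every other field unchanged. As before, the endpoint URL and the x-api-key header name are assumptions; adjust them to your own developer settings.

import requests

# Assumed endpoint and auth header; replace with your own developer settings.
GRAPHQL_ENDPOINT = "https://app.arize.com/graphql"
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}

# Only the fields included in the input are modified; omitted fields keep
# their current values. Here only samplingRate is changed (50 is illustrative).
PATCH_EVAL_TASK = """
mutation UpdateOnlineEvalTask {
  patchEvalTask(input: { onlineTaskId: "TASK_ID", samplingRate: 50 }) {
    evalTask { name samplingRate queryFilter }
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": PATCH_EVAL_TASK}, headers=HEADERS)
response.raise_for_status()
print(response.json())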

Running a Task

To programmatically run a task on historical data, use the runOnlineTask mutation. This allows you to run your task over a specific time range.

mutation RunOnlineTask {
  runOnlineTask(
    input: {
      onlineTaskId: "TASK_ID",                   # The unique identifier for your task
      dataStartTime: "2024-10-08T00:01:41.644Z", # start time for your data            
      dataEndTime: "2024-12-07T01:01:41.644Z",   # end time of your data
      maxSpans: 100                              # the maximum number of spans
  ) {
    result {
      ... on CreateTaskRunResponse {
        runId
      }
      ... on TaskError {
        message
        code
      }
    }
  }
}

Key Input Parameters:

  • onlineTaskId: The unique identifier for your task.

  • dataStartTime: A date-time string at UTC (such as 2007-12-03T10:15:30Z) for the start of your historical data.

  • dataEndTime: A date-time string at UTC for the end of your historical data.

  • maxSpans: The maximum number of spans you would like to evaluate. Defaults to 10,000 if not specified.

Response Types:

The response to this mutation is a union type, meaning it can return one of two types:

  • TaskError: Indicates a failure to start the task run. It includes an error message explaining the reason for the failure.

  • CreateTaskRunResponse: Indicates a successful task run start. It contains the run ID of the newly created task run.
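
Because the result is a union, a client should check which fields come back before using them. The sketch below starts a historical run and branches on runId versus an error; the endpoint URL and the x-api-key header name are assumptions, so substitute your own developer settings.

import requests

# Assumed endpoint and auth header; replace with your own developer settings.
GRAPHQL_ENDPOINT = "https://app.arize.com/graphql"
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}

# The RunOnlineTask mutation from above, embedded as a query string.
RUN_ONLINE_TASK = """
mutation RunOnlineTask {
  runOnlineTask(input: {
    onlineTaskId: "TASK_ID",
    dataStartTime: "2024-10-08T00:01:41.644Z",
    dataEndTime: "2024-12-07T01:01:41.644Z",
    maxSpans: 100
  }) {
    result {
      ... on CreateTaskRunResponse { runId }
      ... on TaskError { message code }
    }
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": RUN_ONLINE_TASK}, headers=HEADERS)
response.raise_for_status()
result = response.json()["data"]["runOnlineTask"]["result"]

if "runId" in result:                 # CreateTaskRunResponse: the run started successfully
    print("Task run started, run ID:", result["runId"])
else:                                 # TaskError: the run could not be started
    print("Task run failed:", result.get("code"), result.get("message"))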
