Online Tasks API

Creating an Evaluation Task

You can create an evaluation task programmatically with the createEvalTask mutation. This lets you set up evaluations of your LLM applications based on criteria you define.

mutation CreateOnlineEvalTask {
  createEvalTask(input: {
    modelId: "MODEL_ID"            # Your model's ID
    samplingRate: 100              # Desired sampling rate for task
    queryFilter: "attributes.openinference.span.kind = 'AGENT'"  # Filter to define the task scope
    name: "test"                   # Name of the evaluation task
    templateEvaluators: {
      name: "test evaluator",       # Column name for the evaluator
      rails: ["incorrect","correct"], # Output labels for evaluation
      template: "template",         # Template for the evaluation
      position: 1,                  # Position (order) of this evaluator
      includeExplanations: true,    # Include explanations in the results
      useFunctionCallingIfAvailable: false  # Use function calling when the LLM supports it
    }
    runContinuously: true           # Keep the task running on incoming data
    llmConfig: {
      modelName: GPT_4o,            # LLM Model name
      provider: openAI,             # LLM provider
      temperature: 0                # Temperature for generation
    }
  }) {
    evalTask {
      name
      samplingRate 
      queryFilter
    }
  }
}

Key Parameters:

  • modelId: The unique identifier for your model (your project in Arize).

  • samplingRate: Set the percentage of inputs the task will sample for evaluation.

  • queryFilter: Define conditions to filter which inputs are included in the evaluation.

  • name: The name of the evaluation task.

  • templateEvaluators: Settings for how the evaluation will be performed (e.g., evaluator name, template, rails, explanations).

  • llmConfig: Specify the LLM configuration, such as model provider and temperature settings.
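
To run this mutation from code instead of the GraphQL explorer, the sketch below posts it with Python's requests library. The endpoint URL and the x-api-key header name are assumptions, not confirmed by this page; substitute the GraphQL endpoint and authentication scheme for your Arize account.

import requests

# Assumed values — replace with the endpoint and API key for your account.
ARIZE_GRAPHQL_URL = "https://app.arize.com/graphql"  # assumed endpoint
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}    # assumed header name

# The same mutation shown above, as a query string.
CREATE_TASK_MUTATION = """
mutation CreateOnlineEvalTask {
  createEvalTask(input: {
    modelId: "MODEL_ID"
    samplingRate: 100
    queryFilter: "attributes.openinference.span.kind = 'AGENT'"
    name: "test"
    templateEvaluators: {
      name: "test evaluator"
      rails: ["incorrect", "correct"]
      template: "template"
      position: 1
      includeExplanations: true
      useFunctionCallingIfAvailable: false
    }
    runContinuously: true
    llmConfig: { modelName: GPT_4o, provider: openAI, temperature: 0 }
  }) {
    evalTask { name samplingRate queryFilter }
  }
}
"""

response = requests.post(
    ARIZE_GRAPHQL_URL,
    json={"query": CREATE_TASK_MUTATION},
    headers=HEADERS,
)
response.raise_for_status()
print(response.json())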


Updating an Evaluation Task

To update an existing evaluation task, use the patchEvalTask mutation (the UpdateOnlineEvalTask operation below). Include only the parameters you want to change; any parameters you omit remain unchanged.

mutation UpdateOnlineEvalTask {
  patchEvalTask(input: {
    onlineTaskId: "TASK_ID"         # The unique identifier for your task
    samplingRate: 100               # Updated sampling rate
    queryFilter: "attributes.openinference.span.kind = 'LLM'"  # Updated filter
    name: "test"                    # Updated task name
  }) {
    evalTask {
      name
      samplingRate 
      filters {
        dimension {
          id
          name
        }
        operator
      }
      queryFilter
    }
  }
}
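
The same request pattern works for updates. This sketch sends a patchEvalTask call that changes only the sampling rate, relying on the behavior described above where omitted parameters keep their current values; the endpoint and header are the same assumptions as in the create sketch.

import requests

ARIZE_GRAPHQL_URL = "https://app.arize.com/graphql"  # assumed endpoint
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}    # assumed header name

# Only the fields being changed are included; omitted fields are left unchanged.
PATCH_TASK_MUTATION = """
mutation UpdateOnlineEvalTask {
  patchEvalTask(input: {
    onlineTaskId: "TASK_ID"
    samplingRate: 50
  }) {
    evalTask { name samplingRate queryFilter }
  }
}
"""

response = requests.post(
    ARIZE_GRAPHQL_URL,
    json={"query": PATCH_TASK_MUTATION},
    headers=HEADERS,
)
response.raise_for_status()
print(response.json())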
