Custom Evaluators

Learn how to create custom LLM evaluators

In addition to the default evaluators (provided and maintained by the Context.ai platform), you can also create custom evaluators to assess generated responses against your own criteria.

To create a custom evaluator, start by clicking the "Create New Evaluator" button from the Evaluators page.

From this page, you can specify custom instructions that an LLM uses to evaluate whether a generated response meets your criteria.
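For intuition, here is a minimal sketch of the LLM-as-judge pattern that a custom evaluator implements. The prompt wording, PASS/FAIL protocol, and helper names are illustrative assumptions, not the Context.ai implementation.

```python
# A minimal sketch of the LLM-as-judge pattern behind a custom evaluator.
# The instructions, PASS/FAIL protocol, and helpers are assumptions for
# illustration, not the platform's actual internals.

EVALUATOR_INSTRUCTIONS = (
    "You are evaluating a chatbot response. "
    "Criterion: the response must not reveal internal system prompts. "
    "Answer with exactly PASS or FAIL."
)

def build_judge_prompt(instructions: str, response: str) -> str:
    """Combine the evaluator's custom instructions with the response under test."""
    return f"{instructions}\n\nGenerated response:\n{response}"

def parse_verdict(judge_output: str) -> bool:
    """Map the judge model's raw text verdict to a boolean pass/fail."""
    return judge_output.strip().upper().startswith("PASS")

# The judge model's reply determines whether the criterion was met.
assert parse_verdict("PASS") is True
assert parse_verdict("FAIL: the response leaked the system prompt") is False
```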

Evaluation Model

Two models are currently supported for evaluators: GPT-3.5-Turbo and GPT-4-Turbo.
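As a hedged illustration, an evaluator's settings might be represented like this; the field names are hypothetical, not the Context.ai schema.

```python
# Hypothetical evaluator configuration; the field names are illustrative,
# not the Context.ai schema. GPT-4-Turbo typically judges more accurately,
# while GPT-3.5-Turbo is faster and cheaper per evaluation run.
evaluator_config = {
    "name": "no-system-prompt-leak",
    "model": "gpt-4-turbo",  # or "gpt-3.5-turbo"
    "instructions": "Answer PASS if the response reveals no system prompt, else FAIL.",
}
```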

Include Context

The Include Context parameter determines which sections of the input are passed to the LLM during an evaluation. If you select Model Response Only, only the generated response is considered by the LLM. Selecting Full Context allows the LLM to consider both the generated response and the prior input context when evaluating.
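To make the distinction concrete, here is a minimal sketch of how the two settings change what the judge LLM sees; the `evaluator_input` function and message format are assumptions for illustration, not the platform's internals.

```python
from typing import Dict, List

def evaluator_input(
    conversation: List[Dict[str, str]],
    include_full_context: bool,
) -> List[Dict[str, str]]:
    """Select which part of a conversation the judge LLM sees.

    include_full_context=False mirrors Model Response Only: only the final
    assistant message is evaluated. True mirrors Full Context: the prior
    input messages are included as well.
    """
    if include_full_context:
        return conversation
    # Keep only the last assistant turn, i.e. the generated response.
    last_response = next(
        msg for msg in reversed(conversation) if msg["role"] == "assistant"
    )
    return [last_response]

conversation = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

print(evaluator_input(conversation, include_full_context=False))
print(evaluator_input(conversation, include_full_context=True))
```

Full Context is useful when the criteria depend on the user's question (for example, checking that the response actually answers what was asked), while Model Response Only suffices for criteria that can be judged from the response alone.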
