ELOQUENT CLEF Shared Tasks for Evaluation of Generative Language Model Quality

Published: 01 Jan 2024 · Last Modified: 01 Oct 2024 · ECIR (5) 2024 · CC BY-SA 4.0
Abstract: ELOQUENT is a set of shared tasks for evaluating the quality and usefulness of generative language models. ELOQUENT aims to bring together high-level quality criteria, grounded in experience from deploying models in real-life tasks, and to formulate tests for those criteria, preferably implemented so as to require minimal human assessment effort and to work in a multilingual setting. The selected tasks for this first year of ELOQUENT are (1) probing a language model for topical competence; (2) assessing the ability of models to generate and detect hallucinations; (3) assessing the robustness of model output under variation in the input prompts; and (4) establishing whether human-generated text can be distinguished from machine-generated text.
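To make task (3) more concrete, the sketch below illustrates one possible way to quantify output robustness under prompt variation: generate a response for each paraphrase of a prompt and report the mean pairwise similarity of the responses. This is a minimal, hypothetical illustration, not the ELOQUENT evaluation protocol; the `generate` stub stands in for any model call, and the `difflib` string-similarity measure is an assumption chosen only to keep the example self-contained.

```python
# Hypothetical sketch of a prompt-robustness check (not the ELOQUENT protocol):
# generate an output per prompt paraphrase and score how similar the outputs are.
from difflib import SequenceMatcher
from itertools import combinations


def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API or a local LLM)."""
    return f"Echoed answer for: {prompt}"


def robustness_score(prompt_variants: list[str]) -> float:
    """Mean pairwise similarity of outputs across prompt paraphrases.

    1.0 means the outputs are identical for all variants; lower values
    indicate that the output drifts as the prompt wording changes.
    """
    outputs = [generate(p) for p in prompt_variants]
    pairs = list(combinations(outputs, 2))
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    variants = [
        "Explain photosynthesis in two sentences.",
        "In two sentences, describe how photosynthesis works.",
        "Give a two-sentence explanation of photosynthesis.",
    ]
    print(f"robustness score: {robustness_score(variants):.3f}")
```

In practice the string-overlap measure could be swapped for an embedding-based or task-specific similarity, depending on what kind of output stability the evaluation is meant to capture.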