Eval4RAG: Workshop on Evaluation of Retrieval-Augmented Generation Systems

Published: 01 Jan 2025, Last Modified: 13 May 2025 · ECIR (5) 2025 · CC BY-SA 4.0
Abstract: As generative models grow in parameter count, constantly fine-tuning them to incorporate new information into their output becomes cost-prohibitive. A popular alternative for incorporating external knowledge into model responses is Retrieval-Augmented Generation (RAG). A number of evaluation campaigns, shared tasks, and collections have attempted to benchmark this new style of system combination, leading to a diverse set of tasks, systems, and evaluation approaches. In this workshop, we aim to provide a platform for discussing the common and task-specific characteristics of evaluating RAG systems, with the goal of eventually creating a heterogeneous testing suite for RAG evaluation. The workshop is intended to be an in-person event with some possibility of online presentations. The workshop website is: https://eval4rag.github.io/.