Towards Lighter and Robust Evaluation for Retrieval Augmented Generation

Published: 05 Mar 2025, Last Modified: 20 Mar 2025
Venue: QUESTION Poster
License: CC BY 4.0
Keywords: RAG, LLM, Retrieval Augmented Generation, Large Language Models, Faithfulness, Evaluation, Correctness, Hallucination
TL;DR: This paper proposes a lighter approach to evaluating the faithfulness and correctness of RAG answers by breaking the task into multiple stages.
Abstract: Large Language Models are prompting us to view more NLP tasks from a generative perspective, and they offer a new way of accessing information, most notably through the RAG framework. While autoregressive models have improved considerably, hallucination in generated answers remains a persistent problem. A standard solution is to use commercial LLMs, such as GPT-4, to evaluate these systems, but such frameworks are expensive and not very transparent. We therefore propose a study that demonstrates the value of open-weight models for evaluating RAG hallucination. We develop a lightweight approach using smaller, quantized LLMs to provide an accessible and interpretable metric that assigns continuous scores to generated answers with respect to their correctness and faithfulness. These scores allow us to question the reliability of individual decisions and to explore decision thresholds, leading to a new AUC-based metric as an alternative to correlation with human judgment.
Submission Number: 24
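
To make the thresholding idea concrete, here is a minimal sketch (not the authors' code) of how a continuous faithfulness score from an open-weight judge could be summarized as an AUC against binary human judgments instead of a correlation; the scores, labels, and variable names are illustrative assumptions.

```python
# Minimal sketch, assuming: an open-weight judge LLM assigns each generated
# answer a continuous faithfulness score in [0, 1], and human annotators give
# a binary ground-truth label (1 = faithful, 0 = hallucinated). Sweeping a
# decision threshold over the scores yields an ROC curve; its area (AUC)
# summarizes the evaluator's reliability without committing to one cut-off.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: continuous judge scores and binary human labels.
judge_scores = np.array([0.92, 0.15, 0.78, 0.40, 0.66, 0.05, 0.88, 0.52])
human_labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Threshold-free summary of how well the score ranks faithful above hallucinated answers.
auc = roc_auc_score(human_labels, judge_scores)
print(f"AUC: {auc:.3f}")

# Threshold sweep: how accept/reject decisions trade off as the cut-off moves.
fpr, tpr, thresholds = roc_curve(human_labels, judge_scores)
for t, f, r in zip(thresholds, fpr, tpr):
    print(f"threshold={t:.2f}  false-positive rate={f:.2f}  true-positive rate={r:.2f}")
```

The design point is that the AUC is computed over all thresholds at once, so the evaluation does not depend on choosing a single score cut-off for flagging hallucinations.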