Adapting Standard Retrieval Benchmarks to Evaluate Generated Answers

Published: 01 Jan 2024 · Last Modified: 25 Apr 2024 · ECIR (2) 2024 · CC BY-SA 4.0
Abstract: Large language models can now directly generate answers to many factual questions without referencing external sources. Unfortunately, relatively little attention has been paid to methods for evaluating the quality and correctness of these answers, for comparing the performance of one model to another, or for comparing one prompt to another. In addition, the quality of generated answers is rarely compared directly to the quality of retrieved answers. As models evolve and prompts are modified, we have no systematic way to measure improvements without resorting to expensive human judgments. To address this problem, we adapt standard retrieval benchmarks to evaluate answers generated by large language models. Inspired by the BERTScore metric for summarization, we explore two approaches. In the first, we base our evaluation on the benchmark relevance judgments, empirically examining how information retrieval relevance judgments can serve as an anchor for evaluating generated answers. In the second, we compare generated answers to the top results retrieved by a diverse set of retrieval models, ranging from traditional approaches to advanced methods, allowing us to measure improvements without human judgments. In both cases, we measure the similarity between an embedded representation of the generated answer and an embedded representation of a known, or assumed, relevant passage from the retrieval benchmark. In our experiments, we evaluate a range of generative models, including several GPT-based variants and open-source large language models, using a variety of prompts, including “liar” prompts intended to produce reasonable but incorrect answers. For retrieval benchmarks, we use the MS MARCO dev set, the TREC Deep Learning 2019 dataset, and the TREC Deep Learning 2020 dataset. Our experimental results support the adaptation of standard benchmarks to the evaluation of generated answers.
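The core measurement described in the abstract can be illustrated with a minimal sketch: embed the generated answer and the judged (or retrieved) relevant passages, then score the answer by its similarity to the closest passage. The encoder (`all-MiniLM-L6-v2`), the use of cosine similarity, and the max-over-passages aggregation below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of embedding-based answer scoring against benchmark passages.
# The encoder and aggregation are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder; the paper may use a different one


def answer_score(generated_answer: str, relevant_passages: list[str]) -> float:
    """Score a generated answer by its maximum cosine similarity to any
    known (or assumed) relevant passage from the retrieval benchmark."""
    answer_emb = model.encode(generated_answer, convert_to_tensor=True)
    passage_embs = model.encode(relevant_passages, convert_to_tensor=True)
    similarities = util.cos_sim(answer_emb, passage_embs)  # shape: (1, n_passages)
    return float(similarities.max())


# Example usage: compare an LLM-generated answer against judged-relevant passages.
score = answer_score(
    "The Eiffel Tower is 330 metres tall.",
    [
        "The Eiffel Tower stands 330 m (1,083 ft) tall.",
        "Completed in 1889, the tower is located on the Champ de Mars in Paris.",
    ],
)
print(f"similarity to nearest relevant passage: {score:.3f}")
```

In the first approach, the passages would come from the benchmark's relevance judgments; in the second, from the top results of one or more retrieval models, removing the need for human judgments.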