Abstract: Query expansion methods powered by large language models (LLMs) have demonstrated effectiveness in zero-shot retrieval tasks. These methods assume that LLMs can generate hypothetical documents that, when incorporated into a query vector, enhance the retrieval of real evidence. However, we challenge this assumption by investigating whether knowledge leakage in benchmarks contributes to the observed performance gains. Using fact verification as a testbed, we analyzed whether the generated documents contained information entailed by ground-truth evidence and assessed their impact on performance. Our findings indicate that performance improvements occurred consistently only for claims whose generated documents included sentences entailed by ground-truth evidence. This suggests that knowledge leakage may be present in these benchmarks, potentially inflating the perceived performance of query expansion methods and overstating their usefulness in real-world scenarios that require retrieving niche or novel knowledge.
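To make the two steps described in the abstract concrete, here is a minimal sketch (not the authors' code) of (1) HyDE-style query expansion, where an LLM-generated hypothetical document is folded into the query vector, and (2) an entailment check that flags whether a generated sentence is entailed by ground-truth evidence. The model names, the averaging scheme, and the sentence-level granularity are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of query expansion with hypothetical documents and an
# NLI-based leakage check; model choices are assumptions, not the paper's setup.
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed dense retriever
nli_name = "roberta-large-mnli"                      # assumed NLI model
nli_tok = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)


def expanded_query_vector(claim: str, hypothetical_doc: str) -> np.ndarray:
    """Average the claim and generated-document embeddings (one common
    HyDE-style combination; the paper may combine them differently)."""
    vecs = encoder.encode([claim, hypothetical_doc], normalize_embeddings=True)
    return vecs.mean(axis=0)


def is_entailed(evidence: str, generated_sentence: str) -> bool:
    """Return True if the ground-truth evidence entails a generated sentence,
    i.e. a potential sign of knowledge leakage into the hypothetical document."""
    inputs = nli_tok(evidence, generated_sentence,
                     return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    label = nli_model.config.id2label[int(logits.argmax(dim=-1))]
    return label.lower() == "entailment"
```

Under this reading, a benchmark claim would be flagged as leakage-prone when any sentence of its generated document passes `is_entailed` against the gold evidence, and retrieval gains would then be compared between flagged and unflagged claims.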
Paper Type: Short
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Query expansion, fact verification, knowledge leakage
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 1380