Exploring the Practicality of Generative Retrieval on Dynamic Corpora

Published: 31 May 2024, Last Modified: 21 Jun 2024 · Gen-IR_SIGIR24 · CC BY 4.0
Keywords: Information Retrieval, Generative Retrieval, Dynamic Corpora, Continual Pretraining, Adaptability of IR models
TL;DR: We conduct a comprehensive comparison of Dual Encoders and Generative Retrieval on dynamic corpora, evaluating their adaptability to changing knowledge and their practicality.
Abstract: Information retrieval (IR) performance is mostly benchmarked on a fixed set of documents (static corpora). In realistic scenarios, however, this is rarely the case: the documents to be retrieved are constantly updated and added. In this paper, we focus on Generative Retrieval (GR), which applies autoregressive language models to IR problems, and explore its adaptability and robustness in dynamic scenarios. We also conduct an extensive evaluation of computational and memory efficiency, crucial factors for the real-world deployment of IR systems that handle vast and ever-changing document collections. Our results on the StreamingQA benchmark demonstrate that GR is more adaptable to evolving knowledge (+4--11\%), more robust in learning knowledge with temporal information, and more efficient in terms of inference FLOPs ($\times 2$), indexing time ($\times 6$), and storage footprint ($\times 4$) than Dual Encoders (DE), which are commonly used in retrieval systems. Our paper highlights the potential of GR for use in practical IR systems within dynamic environments.
Submission Number: 6