More room for language: Investigating the effect of retrieval on language models

Published: 01 Jan 2024 · Last Modified: 14 May 2025 · NAACL (Short Papers) 2024 · CC BY-SA 4.0
Abstract: Retrieval-augmented language models offer a promising alternative to standard language modeling. During pretraining, these models search a corpus of documents for contextually relevant information that could aid the language modeling objective. We introduce an 'ideal retrieval' methodology to study these models in a fully controllable setting. We conduct an extensive evaluation to examine how retrieval augmentation affects the behavior of the underlying language model. Among other things, we observe that these models: (i) store substantially less world knowledge in their weights, (ii) are better at understanding local context and inter-word dependencies, but (iii) are worse at comprehending global context.
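For intuition, below is a minimal, hypothetical sketch of the general retrieval-augmentation pattern the abstract describes: passages retrieved from a corpus are prepended to the model's input context, so that factual lookups can be served by the retrieved text rather than the model's weights (consistent with finding (i)). This is not the paper's 'ideal retrieval' procedure; the toy corpus, the word-overlap scorer, and the `retrieve`/`build_context` helpers are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's method): augment a language model's
# input context with passages retrieved from a small corpus.

# Hypothetical toy corpus standing in for the retrieval corpus.
CORPUS = [
    "Marie Curie won Nobel Prizes in physics and chemistry.",
    "The Eiffel Tower is located in Paris, France.",
    "Transformers process tokens with self-attention.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a toy scorer,
    standing in for whatever retriever a real system would use)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_context(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the LM conditions on them
    in addition to the original input."""
    passages = retrieve(query, corpus)
    return "\n".join(passages) + "\n" + query

if __name__ == "__main__":
    # The retrieved passage supplies the fact; the model need not store it.
    print(build_context("Where is the Eiffel Tower?", CORPUS))
```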