Keywords: model evaluation, long-context language models, working memory limitations, contextual interference, in-context learning, proactive interference, robustness & reliability, top‑down control, cognitive‑science–inspired evaluation
TL;DR: LLMs fail to retrieve recent updates when earlier input gets in the way, revealing working-memory-like limits beyond a model's context length.
Abstract: Information retrieval in Large Language Models (LLMs) is increasingly recognized as intertwined with generation capabilities rather than mere lookup. While longer contexts are often assumed to improve retrieval, the effects of intra-context interference remain understudied. To address this, we adapt the proactive interference (PI) paradigm from cognitive science, where earlier information disrupts recall of newer updates. In humans, susceptibility to such interference is inversely linked to working memory capacity. We introduce PI-LLM, an evaluation that sequentially streams co-referenced key–value updates and queries only the final values. Although these final values are clearly positioned just before the query, LLM retrieval accuracy declines log-linearly toward zero as co-referenced interference accumulates; errors arise from retrieving previously overwritten values. Attempts to mitigate interference via prompt engineering (e.g., instructing models to ignore earlier input) yield limited success. These findings reveal a fundamental constraint on LLMs’ ability to disentangle interference and flexibly manipulate information, suggesting a working memory bottleneck beyond mere context access.
This test advances Needle-in-a-Haystack/MRCR paradigms by eliminating the haystack altogether: by isolating and varying the number of co-referenced “needles,” it directly quantifies interference, revealing a robust log-linear decline in retrieval accuracy across SOTA models as interference grows. This calls for approaches that strengthen models’ ability to suppress co-referenced information during retrieval. A minimal sketch of this style of probe follows below.
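The following is a minimal sketch of how a proactive-interference probe of this kind could be constructed and scored: a stream of co-referenced key–value updates is generated, and the model is queried only for the final value of each key. The key names, prompt wording, and scoring regex are illustrative assumptions, not the authors’ exact PI-LLM format.

```python
import random
import re


def build_pi_prompt(n_keys: int = 10, n_updates_per_key: int = 20, seed: int = 0):
    """Build a stream of co-referenced key-value updates; the query asks only
    for the final (most recent) value of each key.

    NOTE: format details here are assumptions for illustration, not the
    benchmark's exact prompt.
    """
    rng = random.Random(seed)
    keys = [f"key_{i}" for i in range(n_keys)]
    lines, final = [], {}
    for _ in range(n_updates_per_key):
        rng.shuffle(keys)                 # interleave keys within each round
        for k in keys:
            v = str(rng.randint(0, 9999))
            lines.append(f"{k} = {v}")
            final[k] = v                  # later assignments overwrite earlier ones
    prompt = (
        "Below is a stream of updates. Report ONLY the final value of each key.\n"
        + "\n".join(lines)
        + "\n\nFinal values:"
    )
    return prompt, final


def score(model_output: str, final: dict) -> float:
    """Fraction of keys whose final value appears next to the key in the output."""
    hits = sum(
        1
        for k, v in final.items()
        if re.search(rf"{re.escape(k)}\D*{re.escape(v)}", model_output)
    )
    return hits / len(final)


# Usage sketch: accuracy would be expected to fall as n_updates_per_key
# (i.e., accumulated interference) grows, even though the correct values
# always appear immediately before the query.
# prompt, final = build_pi_prompt(n_keys=20, n_updates_per_key=50)
# accuracy = score(llm(prompt), final)   # llm() is a placeholder for a model call
```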
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 18089