Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: SSMs struggle with structured reasoning tasks due to limitations in implementing Gather-and-Aggregate mechanisms, which are critical for retrieval and learning.
Abstract: State-space models (SSMs) offer efficient alternatives to Transformers for long sequences, but their fixed-size recurrent state limits their capability on algorithmic tasks, such as retrieving past context. In this work, we examine how in-context retrieval operates in Transformer- and SSM-based language models and find that both rely on a Gather-and-Aggregate (G&A) mechanism: a Gather Head extracts relevant information from the context, which an Aggregate Head then integrates into the representation. In both architectures, G&A concentrates in a few heads, forming bottlenecks even for simple retrieval. For example, disabling a single Gather or Aggregate Head in a pruned Llama-3.1-8B impairs retrieving the correct answer letter in MMLU, reducing accuracy from 66% to 25%. Moreover, this retrieval bottleneck can obscure the knowledge demands of tasks: the pruned model succeeds on MMLU when its G&A heads are functioning yet fails on other knowledge benchmarks. The bottleneck similarly extends to tasks where SSMs typically underperform, such as GSM8K, BBH, and dialogue. We show that SSMs' retrieval challenges manifest in these heads, which produce smoother attention patterns instead of the sharp transitions that effective G&A requires. Thus, the Transformer-SSM retrieval gap resides in just a few heads rather than the entire language model. This suggests a unified explanation for the Transformer vs. SSM performance gap while showing how to merge their strengths. We find that pretrained hybrid models, where SSMs are combined with attention layers, delegate the role of Aggregate Heads to attention. Similarly, replacing a single G&A head in a pretrained SSM with an attention variant boosts retrieval and benchmark scores.
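To make the head-ablation result concrete, below is a minimal sketch of disabling a single attention head and checking its effect on next-token retrieval (e.g., the MMLU answer letter). This is not the authors' released code: it assumes a Hugging Face `transformers` Llama-style checkpoint, uses the full (unpruned) model, and the `LAYER`/`HEAD` indices are hypothetical placeholders rather than the specific G&A heads identified in the paper.

```python
# Sketch: ablate one attention head by zeroing its slice of the o_proj input,
# then inspect the model's next-token prediction. Assumes a Hugging Face
# Llama-style checkpoint; LAYER and HEAD are placeholder indices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"   # any Llama-style checkpoint
LAYER, HEAD = 16, 12                # hypothetical head to ablate

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

head_dim = model.config.hidden_size // model.config.num_attention_heads

def zero_head(module, args):
    # o_proj receives the concatenated per-head outputs; zeroing one
    # head_dim-wide slice removes that head's contribution entirely.
    (hidden,) = args
    hidden = hidden.clone()
    hidden[..., HEAD * head_dim:(HEAD + 1) * head_dim] = 0.0
    return (hidden,)

o_proj = model.model.layers[LAYER].self_attn.o_proj
handle = o_proj.register_forward_pre_hook(zero_head)

# Compare the predicted answer letter with and without the hook attached;
# a large accuracy drop across prompts flags a retrieval-critical head.
prompt = "Question: ...\nA. ...\nB. ...\nC. ...\nD. ...\nAnswer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
print(tok.decode(logits.argmax().item()))

handle.remove()   # restore the original model
```

Running the same loop over a benchmark with the hook attached versus removed gives the kind of per-head ablation comparison described in the abstract.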
Lay Summary: Modern AI systems are powered by language models that learn to understand and generate text by analyzing large amounts of written data. Most successful models today rely on complex attention mechanisms that can look back over everything they’ve seen so far, but this comes with high computational cost. Newer, more efficient models use a compact form of memory, which makes them faster and lighter—but often at the expense of accuracy, especially on tasks that require recalling earlier parts of the input. Our study investigates this gap in ability and finds that both types of models—despite their architectural differences—use a similar strategy to handle retrieval: one part identifies the relevant information, and another integrates it into the model’s final response. We call this the “gather-and-aggregate” mechanism. We show that this retrieval process is handled by just a few key components. If those are disrupted, performance on challenging tasks drops sharply. This helps explain why efficient models underperform and offers a practical solution: combining their strengths with just a few attention-based components can significantly improve results. Our findings provide insight into how language models retrieve information and point to ways to design more efficient and capable AI systems.
Link To Code: https://github.com/goombalab/Gather-and-Aggregate
Primary Area: Deep Learning->Large Language Models
Keywords: LLM;Transformers;Mamba;MMLU;In-context-Learning;mechanistic-interpretability
Submission Number: 3291