Keywords: Multi-Hop Reasoning, Latent Reasoning, Interpretability of LLMs
Abstract: Large language models (LLMs) perform well on multi-hop reasoning, yet how they internally compose multiple facts remains unclear. Recent work proposes the *hop-aligned circuit hypothesis*, which holds that bridge entities are computed sequentially across layers before later-hop answers. Through systematic analyses of real-world multi-hop queries, we show that this hop-aligned assumption does not generalize: later-hop answer entities can become decodable earlier than bridge entities, a phenomenon we call *layer-order inversion* that strengthens as the total number of hops increases. To explain this behavior, we propose a *probabilistic recall-and-extract* framework that models multi-hop reasoning as broad probabilistic recall in shallow MLP layers followed by selective extraction in deeper attention layers. We validate this framework through systematic probing analyses; it reinterprets prior layer-wise decoding evidence, explains chain-of-thought gains, and offers a mechanistic diagnosis of multi-hop failures that occur despite correct single-hop knowledge.
Code is available at https://anonymous.4open.science/r/Layer-Order-Inversion/.
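To make the layer-wise decodability analysis concrete, here is a minimal logit-lens-style sketch, not the authors' released implementation: the model, prompt, and entity tokens are illustrative assumptions. It records, per layer, the rank of a bridge entity versus the final answer; layer-order inversion corresponds to the answer reaching a low rank at an earlier layer than the bridge.

```python
# Hedged sketch of layer-wise decodability probing (logit lens).
# Assumptions: GPT-2 as a stand-in model; a hypothetical 2-hop query
# with bridge entity "France" and answer entity "Paris".
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of the country where the Eiffel Tower is located is"
bridge_id = tok.encode(" France")[0]  # bridge entity token (assumed)
answer_id = tok.encode(" Paris")[0]   # final answer token (assumed)

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Project every layer's last-position hidden state through the final
# layer norm and the unembedding matrix, then rank the vocabulary.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    ranks = torch.argsort(logits, descending=True)
    bridge_rank = (ranks == bridge_id).nonzero().item()
    answer_rank = (ranks == answer_id).nonzero().item()
    # Inversion signature: answer_rank drops low at a shallower
    # layer than bridge_rank does.
    print(f"layer {layer:2d}  bridge rank {bridge_rank:5d}  answer rank {answer_rank:5d}")
```

Comparing the two rank curves across layers gives a per-query inversion test; the paper's actual probing setup may differ in model, decoding method, and entity selection.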
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: explainability of NLP models, probing, knowledge tracing/discovering/inducing, model editing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4930