Tracing Facts or just Copies? A critical investigation of the Competitions of Mechanisms in Large Language Models

Published: 14 Jul 2025. Last Modified: 14 Jul 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: This paper presents a reproducibility study examining how Large Language Models (LLMs) manage competing factual and counterfactual information, focusing on the role of attention heads in this process. We attempt to reproduce and reconcile findings from three recent studies, by Ortu et al. [13], Yu, Merullo, and Pavlick [17], and McDougall et al. [7], that investigate the competition between model-learned facts and contradictory context information using mechanistic interpretability tools. Our study specifically examines the relationship between attention head strength and factual output ratios, evaluates competing hypotheses about attention heads' suppression mechanisms, and investigates the domain specificity of these attention patterns. Our findings suggest that attention heads promoting factual output do so via general copy suppression rather than selective counterfactual suppression, as strengthening them can also inhibit correct facts. Additionally, we show that attention head behavior is domain-dependent, with larger models exhibiting more specialized and category-sensitive patterns.
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: Daphne Ippolito
Submission Number: 4326