From Indirect Object Identification to Syllogisms: Exploring Binary Mechanisms in Transformer Circuits

Published: 24 Sept 2025, Last Modified: 24 Sept 2025 · INTERPLAY · CC BY 4.0
Keywords: Mechanistic interpretability, Explanation, Circuits Discovery
Abstract: Transformer-based large language models (LLMs) can perform a wide range of tasks, and mechanistic interpretability aims to reverse-engineer the components responsible for task completion in order to understand their behavior. Previous mechanistic interpretability research has primarily focused on linguistic tasks such as Indirect Object Identification (IOI). In this paper, we investigate the ability of GPT-2 small to handle binary truth values by analyzing its behavior on syllogistic prompts such as "Statement A is true. Statement B matches statement A. Statement B is", which require more complex logical reasoning than IOI. Through our analysis of several syllogism tasks of varying difficulty, we identify multiple circuits that explain GPT-2 small's logical-reasoning capabilities and uncover binary mechanisms that facilitate task completion, including the ability, via negative heads, to produce a negated token that does not appear in the input prompt. Our evaluation using a faithfulness metric shows that a circuit comprising five attention heads achieves over 90% of the original model's performance. By relating our findings to the IOI analysis, we provide new insights into the roles of attention heads and MLPs in LLMs. We believe these insights contribute to a broader understanding of model reasoning and will benefit future research in mechanistic interpretability.
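To make the prompt format concrete, below is a minimal sketch (not the authors' code) of probing GPT-2 small with a syllogistic prompt of the kind described in the abstract, using the Hugging Face `transformers` library. It compares the model's next-token logits for " true" versus " false"; the specific prompt string and the two candidate completions are illustrative assumptions, not the paper's evaluation setup.

```python
# Sketch: probe GPT-2 small with a syllogistic prompt and compare the
# next-token logits for " true" vs. " false". Assumes the Hugging Face
# `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Prompt format from the abstract; the model should complete it with "true".
prompt = "Statement A is true. Statement B matches statement A. Statement B is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits over the vocabulary for the token following the prompt.
    logits = model(**inputs).logits[0, -1]

# " true" and " false" (with a leading space, as GPT-2's BPE expects)
# each tokenize to a single id.
true_id = tokenizer.encode(" true")[0]
false_id = tokenizer.encode(" false")[0]

print(f"logit(' true')  = {logits[true_id]:.3f}")
print(f"logit(' false') = {logits[false_id]:.3f}")
print(f"logit difference = {(logits[true_id] - logits[false_id]).item():.3f}")
```

A positive logit difference indicates the model favors the correct completion; the same logit-difference quantity is a common basis for the kind of faithfulness comparison between a discovered circuit and the full model that the abstract describes.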
Public: Yes
Track: Main-Long
Submission Number: 17