Beyond Parallelism: Synergistic Computational Graph Effects in Multi-Head Attention

Published: 23 Sept 2025, Last Modified: 27 Nov 2025 · NeurReps 2025 Proceedings · CC BY 4.0
Keywords: Attention, Transformer, Graph Theory, Directed Acyclic Graph, Computational Graph, Markov Chain, Mixing Time, Minimax Fidelity, Signal Propagation
Abstract: Multi-head attention powers Transformer networks, the primary deep learning architecture behind the success of large language models (LLMs). Yet the theoretical advantages of multi-head over single-head attention, beyond mere parallel processing, remain underexplored. In this paper, we reframe multi-head attention as a system of potentially synergistic computational graphs, where each head functions as a feedforward directed acyclic graph (DAG) with a common sink state. We provide intuition and preliminary theoretical analysis of mixing time and minimax fidelity in this framework. Our results show that multi-head attention can synergistically enhance information propagation, yielding faster mixing times and minimax fidelity amplification under specific head-diversity conditions. Finally, we train single-head and multi-head Transformers, each with the same total number of parameters, on sequence manipulation tasks and empirically verify the predicted effects. The code is available at https://github.com/haitzsaezdeocariz/beyondparallelism.
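The mixing-time intuition from the abstract can be illustrated with a small sketch (not taken from the paper's code): if each attention head is viewed as a row-stochastic transition matrix over tokens, its mixing speed is governed by the second-largest eigenvalue modulus (SLEM) — a smaller SLEM means a larger spectral gap and faster mixing. The matrices `A`, `B` below are hypothetical heads chosen for illustration: each head alone mixes slowly, but averaging two *diverse* heads shrinks the SLEM, mirroring the claimed synergistic effect under head-diversity conditions.

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a row-stochastic matrix P.
    Smaller SLEM -> larger spectral gap -> faster mixing."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mags[1]

# Two hypothetical 3-token attention heads (row-stochastic).
# Head A is diagonal-heavy (strong self-attention); head B is a
# smoothed cyclic shift. Each alone has SLEM 0.7 (slow mixing).
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
B = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

# Uniformly averaging the two diverse heads halves the SLEM to 0.35,
# i.e., the combined graph propagates information faster than either head.
M = 0.5 * (A + B)

print(slem(A), slem(B), slem(M))
```

The effect hinges on diversity: averaging a head with a copy of itself leaves the SLEM unchanged, whereas heads whose slow eigendirections differ cancel each other's bottlenecks.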
Submission Number: 47