Behind RoPE: How Does Causal Mask Encode Positional Information?

ICLR 2026 Conference Submission 16225 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Causal Mask, Positional Encoding, RoPE
TL;DR: We prove that the causal mask can induce position-dependent patterns in attention scores, similar to positional encodings. The causal mask distorts RoPE's relative attention pattern into a non-relative one, and this distortion is commonly observed in LLMs.
Abstract: While explicit positional encodings such as RoPE are a primary source of positional information in Transformer decoders, the causal mask also provides positional information. In this work, we prove that the causal mask can induce position-dependent patterns in attention scores, even without parameters or causal dependency in the input. Our theoretical analysis indicates that the induced attention pattern tends to favor nearby query-key pairs, mirroring the behavior of common positional encodings. Empirical analysis confirms that trained models exhibit the same behavior, with learned parameters further amplifying these patterns. Notably, we find that the interaction of the causal mask and RoPE distorts RoPE's relative attention score patterns into non-relative ones. We consistently observe this effect in modern large language models, suggesting the importance of considering the causal mask as a source of positional information alongside explicit positional encodings.
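A minimal numerical sketch of the headline claim (not the paper's own setup or proof): two parameter-free causal self-attention layers are applied to i.i.d. Gaussian inputs, and the second layer's attention weights are averaged over many random draws. Since the inputs carry no positional signal and no projections are learned, any non-uniformity in the averaged weights can only come from the causal mask. The layer count, sequence length, dimension, and sample size below are illustrative choices, not values from the paper.

```python
import numpy as np

def causal_softmax_attention(x):
    """One parameter-free causal self-attention layer:
    queries = keys = values = x, scaled dot-product scores, causal mask."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # (T, T) raw attention logits
    mask = np.tril(np.ones((T, T), dtype=bool))    # causal mask: key <= query
    scores = np.where(mask, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights

rng = np.random.default_rng(0)
T, d, n_samples = 16, 64, 2000
avg_w2 = np.zeros((T, T))

for _ in range(n_samples):
    x = rng.standard_normal((T, d))        # i.i.d. inputs: no positional signal
    h1, _ = causal_softmax_attention(x)    # layer 1
    _, w2 = causal_softmax_attention(h1)   # layer 2 attention weights
    avg_w2 += w2
avg_w2 /= n_samples

# Average layer-2 attention weight from the last query to each key position.
# Per the paper's claim, this profile is position-dependent rather than uniform,
# tending to favor keys near the query.
print("key position:", np.arange(T))
print("avg weight  :", np.round(avg_w2[-1], 4))
```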
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16225