Keywords: Causal Mask, Positional Encoding, Recency Bias
Abstract: Causal self-attention provides positional information to Transformer decoders.
Prior work has shown that stacks of causal self-attention layers alone induce a positional bias in attention scores toward earlier tokens.
However, this contradicts the bias toward later tokens, known as recency bias, that is typically observed in Transformer decoders.
We address this discrepancy by analyzing how causal self-attention interacts with other architectural components.
We show that stacked causal self-attention layers combined with LayerNorm induce recency bias.
Furthermore, we examine the effects of residual connections and the distribution of input token embeddings on this bias.
Our results provide new theoretical insights into how positional information interacts with architectural components and suggest directions for improving positional encoding strategies.
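As a minimal illustration of the prior-work claim above (not the paper's analysis), the following numpy sketch shows that under a causal mask with uniform attention logits, earlier key positions accumulate more total attention across queries, i.e. a bias toward earlier tokens. The sequence length `T` is an arbitrary choice for the demo.

```python
import numpy as np

# Toy sketch: causal mask + uniform (zero) logits.
# Each query attends equally over all positions up to itself,
# so earlier key positions receive more cumulative attention.
T = 6  # hypothetical sequence length
logits = np.zeros((T, T))
future = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly future positions
logits[future] = -np.inf  # causal mask

# Row-wise softmax (each row has at least one finite entry, so no NaNs)
attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

# Total attention received by each key position, summed over queries
total = attn.sum(axis=0)
print(total)  # monotonically decreasing: earlier tokens attract more attention
```

Column `j` receives sum_{i >= j} 1/(i+1), which strictly decreases in `j`; this is the early-token bias that the abstract contrasts with the recency bias observed once LayerNorm and other components are included.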
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: generative models
Contribution Types: Model analysis & interpretability, Theory
Languages Studied: None
Submission Number: 7215