A Circular Argument: Does RoPE need to be Equivariant for Vision?

Published: 23 Sept 2025, Last Modified: 17 Nov 2025, UniReps 2025, CC BY 4.0
Track: Extended Abstract Track
Keywords: RoPE, Equivariance, Vision Transformers, Lie Algebra
TL;DR: We provide evidence that RoPE does not need strict shift-equivariance
Abstract: Rotary Positional Encodings (RoPE) have emerged as a highly effective technique for one-dimensional sequences in Natural Language Processing, spurring recent progress towards generalizing RoPE to higher-dimensional data such as images and videos. The success of RoPE has been attributed to its positional equivariance, i.e. its status as a \textit{relative} positional encoding. In this paper, we show mathematically that RoPE is one of the most general solutions for equivariant positional embedding in one-dimensional data. Moreover, we show that Mixed RoPE is the analogously general solution for $M$-dimensional data if we require commutative generators, a property necessary for RoPE's equivariance. However, we question whether strict equivariance plays a large role in RoPE's performance. We propose Spherical RoPE, a method analogous to Mixed RoPE but with non-commutative generators. Empirically, we find Spherical RoPE to have learning behavior equivalent to or better than that of its equivariant analogues. This suggests that relative positional embeddings are not as important for vision as is commonly believed. We expect this finding to facilitate future work on positional encodings for vision that are faster and generalize better, by removing the preconception that they must be relative.
Submission Number: 125
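
The abstract's contrast between commuting and non-commuting generators can be made concrete with a short NumPy sketch. The `rope_1d` function below is the standard 1D RoPE construction; `spherical_rope_2d`, its frequency schedule, and the choice of x-/y-axis SO(3) generators are illustrative assumptions on our part, not the paper's exact Spherical RoPE.

```python
# A minimal sketch (not the paper's code): standard shift-equivariant
# 1D RoPE vs. a hypothetical non-commutative "spherical" encoding
# built from SO(3) rotations.
import numpy as np


def rope_1d(x: np.ndarray, pos: float, base: float = 10000.0) -> np.ndarray:
    """Standard 1D RoPE: rotate each consecutive feature pair by pos * theta_k."""
    d = x.shape[-1]
    assert d % 2 == 0
    k = np.arange(d // 2)
    theta = pos * base ** (-2.0 * k / d)   # per-pair rotation angles
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin   # 2x2 rotation applied per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


def _rot_x(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])


def _rot_y(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])


def spherical_rope_2d(x: np.ndarray, pos: tuple, base: float = 10000.0) -> np.ndarray:
    """Hypothetical spherical encoding of a 2D position: each 3D feature
    block is rotated on the sphere by non-commuting x- and y-axis rotations,
    so attention scores are not strictly shift-equivariant."""
    d = x.shape[-1]
    assert d % 3 == 0
    px, py = pos
    out = np.empty_like(x)
    for k in range(d // 3):
        theta = base ** (-3.0 * k / d)               # assumed frequency schedule
        R = _rot_x(px * theta) @ _rot_y(py * theta)  # order matters: SO(3) is non-abelian
        out[3 * k:3 * k + 3] = R @ x[3 * k:3 * k + 3]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k = rng.standard_normal(6), rng.standard_normal(6)
    # 1D RoPE is relative: the score depends only on the position offset.
    print(np.isclose(rope_1d(q, 3) @ rope_1d(k, 1),
                     rope_1d(q, 10) @ rope_1d(k, 8)))   # True
    # The spherical sketch is not: shifting both 2D positions changes the score.
    print(np.isclose(spherical_rope_2d(q, (1, 2)) @ spherical_rope_2d(k, (3, 4)),
                     spherical_rope_2d(q, (6, 7)) @ spherical_rope_2d(k, (8, 9))))  # False in general
```

The final check is the crux of the abstract's question: because the x- and y-axis rotations do not commute, the spherical variant gives up the strict relative property that Mixed RoPE's commuting generators guarantee.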