PhasorTransformer: Integrating Rotational Inductive Biases for Complex-Valued Sequence Modeling

TMLR Paper6609 Authors

22 Nov 2025 (modified: 08 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Deep neural networks typically process complex-valued signals (such as RF waveforms or MRI data) via a convenient approximation: they split the real and imaginary parts into separate, independent channels. This works, but it ignores the underlying mathematics. By treating these components as disjoint, standard architectures become blind to the signal's algebraic structure, specifically the rotational geometry of the phase. We introduce the PhasorTransformer to correct this misalignment. Instead of avoiding complex arithmetic, our architecture embeds it directly into the attention mechanism. We generalize Rotary Positional Embeddings (RoPE) to the complex plane and apply a Hermitian inner product to derive a strictly equivariant attention layer; this allows the network to handle phase shifts naturally rather than relearning them as separate features. On the Long-Range Arena (Sequential CIFAR-10) and Radio Modulation Classification benchmarks, our approach matches or outperforms state-of-the-art real-valued baselines. Crucially, it achieves these results with up to a 20× reduction in parameters, demonstrating that respecting the holomorphic structure of physical signals provides a massive efficiency advantage.
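
To make the mechanism concrete, below is a minimal single-head sketch in PyTorch of the two ingredients the abstract names: RoPE generalized to the complex plane (a position-dependent phase rotation of complex features) and an attention score built from the Hermitian inner product. The function names, shapes, and the 1/sqrt(d) scaling are illustrative assumptions on our part, not the authors' implementation.

    import torch

    def complex_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        # Rotate feature k at position m by e^{i * m * theta_k}: RoPE written
        # natively in the complex plane instead of as 2x2 real rotation blocks.
        seq_len, dim = x.shape
        theta = base ** (-torch.arange(dim, dtype=torch.float32) / dim)
        pos = torch.arange(seq_len, dtype=torch.float32)
        phase = torch.exp(1j * pos[:, None] * theta[None, :])
        return x * phase

    def hermitian_attention(q: torch.Tensor, k: torch.Tensor,
                            v: torch.Tensor) -> torch.Tensor:
        # Scores come from the Hermitian inner product q . conj(k). A global
        # phase shift x -> e^{i phi} x multiplies q by e^{i phi} and conj(k)
        # by e^{-i phi}, so the attention weights are unchanged; the output
        # then rotates with the values, giving phase equivariance.
        scores = torch.einsum("id,jd->ij", q, k.conj()).real / q.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1).to(v.dtype)
        return torch.einsum("ij,jd->id", attn, v)

    # Toy usage on a random complex-valued sequence.
    seq_len, dim = 16, 8
    q = torch.randn(seq_len, dim, dtype=torch.cfloat)
    k = torch.randn(seq_len, dim, dtype=torch.cfloat)
    v = torch.randn(seq_len, dim, dtype=torch.cfloat)
    out = hermitian_attention(complex_rope(q), complex_rope(k), v)

Note that after the rotary phases are applied, the score between positions m and n picks up a factor e^{i(m-n)theta}, so it depends only on the relative offset, which is the usual RoPE property carried over to complex features.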
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jasper_Snoek1
Submission Number: 6609