Keywords: Equivariance, geometric deep learning, transformers, equivariant transformers, dynamic convolution, Rotary Position Embeddings, RoPE
TL;DR: We introduce the Platonic Transformer, a method that gives standard transformers a built-in understanding of 3D geometry at no extra computational cost.
Abstract: While widespread, Transformers lack inductive biases for the geometric symmetries common in science and computer vision. Existing equivariant methods often rely on complex, computationally intensive designs that sacrifice the efficiency and flexibility that make Transformers so effective. We introduce the Platonic Transformer to resolve this trade-off. By defining attention relative to reference frames from the Platonic solid symmetry groups, our method induces a principled weight-sharing scheme. This enables combined equivariance to continuous translations and Platonic symmetries, while preserving the exact architecture and computational cost of a standard Transformer. Furthermore, we show that this attention is formally equivalent to a dynamic group convolution, which reveals that the model learns adaptive geometric filters and enables a highly scalable, linear-time convolutional variant. Across diverse benchmarks in computer vision (CIFAR-10), 3D point clouds (ScanObjectNN), and molecular property prediction (QM9, OMol25), the Platonic Transformer achieves competitive performance by leveraging these geometric constraints at no additional cost.
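The frame-based weight-sharing idea described in the abstract can be illustrated with a minimal sketch; this is not the authors' implementation. Features are lifted to one copy per rotation of a Platonic solid, and the relative position between tokens is expressed in each frame before entering the attention logits, while all frames share the same projection weights. Here the octahedral (cube) rotation group and a simple scalar positional bias stand in for the paper's choice of solid and its RoPE-based relative encoding; the names `PlatonicFrameAttention` and `octahedral_group` are hypothetical.

```python
# Minimal sketch of frame-shared group attention (illustrative, not the authors' code).
import torch


def octahedral_group():
    """Generate the 24 rotation matrices of the cube/octahedron by closure."""
    gz = torch.tensor([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
    gx = torch.tensor([[1., 0., 0.], [0., 0., -1.], [0., 1., 0.]])  # 90 deg about x
    elems, frontier = [torch.eye(3)], [torch.eye(3)]
    while frontier:
        new = []
        for R in frontier:
            for g in (gz, gx):
                C = torch.round(g @ R)
                if not any(torch.allclose(C, E) for E in elems):
                    elems.append(C)
                    new.append(C)
        frontier = new
    return torch.stack(elems)  # (|G|, 3, 3)


class PlatonicFrameAttention(torch.nn.Module):
    """Attention whose weights are shared across the |G| reference frames.

    Tokens carry one feature copy per group element; the relative position
    between tokens is rotated into each frame before it biases the attention
    logits, so rotating the input by any g in G permutes the frame axis.
    """

    def __init__(self, dim, frames):
        super().__init__()
        self.register_buffer("frames", frames)              # (|G|, 3, 3)
        self.q = torch.nn.Linear(dim, dim, bias=False)
        self.k = torch.nn.Linear(dim, dim, bias=False)
        self.v = torch.nn.Linear(dim, dim, bias=False)
        self.pos_bias = torch.nn.Linear(3, 1, bias=False)   # scalar bias from rotated rel. pos.

    def forward(self, feats, pos):
        # feats: (N, |G|, dim) lifted features, pos: (N, 3) coordinates.
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        rel = pos[None, :, :] - pos[:, None, :]                               # (N, N, 3)
        rel_g = torch.einsum("gab,ijb->gija", self.frames.transpose(-1, -2), rel)
        logits = torch.einsum("igd,jgd->gij", q, k) / feats.shape[-1] ** 0.5  # (|G|, N, N)
        logits = logits + self.pos_bias(rel_g).squeeze(-1)
        attn = torch.softmax(logits, dim=-1)
        return torch.einsum("gij,jgd->igd", attn, v)                          # (N, |G|, dim)


# Equivariance check: rotating the point cloud by a group element permutes
# the frame axis of the output (frame g maps to the frame of R^T R_g).
G = octahedral_group()
layer = PlatonicFrameAttention(dim=16, frames=G)
pos = torch.randn(8, 3)
feats = torch.randn(8, 1, 16).expand(-1, G.shape[0], -1)  # frame-independent lift
out = layer(feats, pos)
R = G[5]
out_rot = layer(feats, pos @ R.T)
perm = [int(torch.argmin(torch.stack([(R.T @ G[g] - G[h]).abs().sum()
                                      for h in range(len(G))])))
        for g in range(len(G))]
print(torch.allclose(out_rot, out[:, perm, :], atol=1e-5))  # expected: True
```

Because every frame reuses the same Q/K/V projections, the per-frame computation is an ordinary attention pass, which is why the cost matches a standard Transformer up to the lifting dimension.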
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21309