Keywords: equivariance, representation-learning, group-theory, symmetry
TL;DR: We prove that an equivariant encoder's latent space must contain the regular representation, and enforce this structure with a lightweight auxiliary loss that adds no learnable parameters, achieving state-of-the-art performance.
Abstract: Equivariant neural networks incorporate symmetries through group actions, embedding them as an inductive bias to improve performance. Prominent methods learn an equivariant action on the latent space or design architectures that are equivariant by construction. These approaches often deliver strong empirical results but can impose architecture-specific constraints, large parameter counts, and high computational cost. We challenge the paradigm of complex equivariant architectures with a parameter-free approach grounded in representation theory. We prove that for an equivariant encoder over a finite group, the latent space must almost surely contain one copy of the regular representation for each linearly independent data orbit, a result we corroborate with empirical studies. Leveraging this foundational algebraic insight, we impose the regular representation as an inductive bias via an auxiliary loss, adding no learnable parameters. Our extensive evaluation shows that this method matches or outperforms specialized models in several cases, even those designed for infinite groups. We further validate our choice of the regular representation through an ablation study, showing it consistently outperforms a defining representation baseline.
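To make the idea concrete, below is a minimal sketch of what such a parameter-free auxiliary loss could look like for the cyclic group C_4 acting on images by 90-degree rotations. The abstract does not specify the exact loss, so the block is a hypothetical illustration, not the paper's method: it assumes the latent dimension is a multiple of |G| = 4, views the latent vector as blocks carrying copies of the regular representation, and penalizes the mismatch between encoding a rotated input and cyclically shifting the encoding of the original input (the regular representation of C_4 acts by cyclic shifts). The function name `regular_rep_loss` and the blockwise layout are assumptions.

```python
import torch
import torch.nn.functional as F

def regular_rep_loss(encoder, x, n_group=4):
    """Hypothetical auxiliary loss for C_4 acting on images by 90-degree rotations.

    The latent vector is split into blocks of size n_group; the regular
    representation acts on each block by a cyclic shift, so we penalize the
    gap between encoder(rotated x) and the shifted encoder(x). No learnable
    parameters are introduced.
    """
    z = encoder(x)                                   # (B, D), with D divisible by n_group
    B, D = z.shape
    z_blocks = z.view(B, D // n_group, n_group)      # blocks = copies of the regular rep
    loss = 0.0
    for k in range(1, n_group):
        x_rot = torch.rot90(x, k, dims=(-2, -1))     # action of g^k on the data
        z_rot = encoder(x_rot).view(B, D // n_group, n_group)
        target = torch.roll(z_blocks, shifts=k, dims=-1)  # regular rep: shift by k
        loss = loss + F.mse_loss(z_rot, target)
    return loss / (n_group - 1)
```

In training, such a term would be added to the main task loss with a weighting coefficient; since it only constrains the encoder's outputs, it adds no parameters to the model.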
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19190