Invariant and equivariant architectures via learned polarization

ICLR 2026 Conference Submission 24302 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Invariant, equivariant, polarization, universal approximation
TL;DR: Separating sets of invariants on high-dimensional spaces are constructed from low-dimensional invariants and used to build invariant and equivariant neural networks.
Abstract: We present a theoretical framework for constructing invariant and equivariant neural network architectures based on polarization methods from classical invariant theory. Existing approaches to enforcing symmetries in machine learning models often rely on explicit knowledge of the invariant ring of a group action, which is computationally demanding or intractable for many groups. Our framework leverages polarization to generate separating sets of invariant polynomials on high-dimensional group representations from those of lower-dimensional ones. We establish conditions under which separating sets can be obtained via standard, simple, or cheap polarization and demonstrate how these results can be combined with recent advances in separating families to yield small, expressive sets of invariants. This construction ensures universal approximation of continuous invariant functions while reducing computational complexity. We further discuss the implications for designing scalable invariant and equivariant architectures and identify settings where polarization provides a practical advantage, particularly for high-dimensional representations of finite groups.
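To make the polarization step concrete, here is a minimal, self-contained sketch; this is our own illustration, not code from the submission. For the symmetric group S_n permuting the coordinates of R^n, the power sums p_k(x) = Σ_i x_i^k separate orbits of a single vector, and polarizing them, i.e., expanding p_k(x + t·y) in a formal parameter t, yields the bivariate power sums q_{a,b}(x, y) = Σ_i x_i^a y_i^b, which are invariant under the diagonal S_n-action on pairs of vectors. The function name `polarized_power_sums` and the degree cutoff `max_deg` are illustrative choices, not notation from the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' code): polarization for S_n acting
# on R^n by permuting coordinates. Power sums p_k(x) = sum_i x_i^k separate
# S_n-orbits of one vector; polarizing them gives the bivariate power sums
# q_{a,b}(x, y) = sum_i x_i^a * y_i^b, invariants of the diagonal action
# on the higher-dimensional space R^n x R^n.

def polarized_power_sums(x, y, max_deg):
    """All q_{a,b}(x, y) = sum_i x_i^a y_i^b with 1 <= a + b <= max_deg."""
    return np.array([
        np.sum(x**a * y**b)
        for a in range(max_deg + 1)
        for b in range(max_deg + 1 - a)
        if a + b >= 1
    ])

rng = np.random.default_rng(0)
n = 5
x, y = rng.normal(size=n), rng.normal(size=n)
perm = rng.permutation(n)

feats = polarized_power_sums(x, y, max_deg=3)
# Apply the same permutation to both copies (the diagonal action).
feats_perm = polarized_power_sums(x[perm], y[perm], max_deg=3)
assert np.allclose(feats, feats_perm)  # invariance check
print(feats)
```

Feeding such an invariant feature map into a standard MLP gives a permutation-invariant network; the submission's results concern when polarized sets of this kind remain separating, which is what underwrites universal approximation of continuous invariant functions.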
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 24302