On Universality of Deep Equivariant Networks

ICLR 2026 Conference Submission 22747 Authors

20 Sept 2025 (modified: 19 Nov 2025) · CC BY 4.0
Keywords: Geometric Deep Learning, Theory for Equivariant Neural Networks, Expressiveness, Approximation Theory
TL;DR: We show that depth is decisive for universality. We derive separation-constrained universality results for invariant and equivariant networks, unifying and extending prior work.
Abstract: Universality results for equivariant neural networks remain rare. Those that do exist typically hold only in restrictive settings: either they rely on regular or higher-order tensor representations, leading to impractically high-dimensional hidden spaces, or they target specialized architectures, often confined to the invariant setting. This work develops a more general account. For invariant networks, we establish a universality theorem under separation constraints, showing that adding a fully connected readout layer yields approximation of every function in the class of separation-constrained continuous functions. For equivariant networks, where results are even scarcer, we demonstrate that standard separability notions are inadequate and introduce the sharper criterion of *entry-wise separability*. We show that, with sufficient depth or with the addition of appropriate readout layers, equivariant networks attain universality within the entry-wise separable regime. Together with prior results showing the failure of universality for shallow models, our findings identify depth and readout layers as decisive mechanisms for universality, and they offer a unified perspective that subsumes and extends earlier specialized results.
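To make the invariant-case claim concrete, the sketch below gives a schematic, Stone–Weierstrass-style form of a separation-constrained universality statement. This is an illustrative paraphrase under standard assumptions (compact domain, continuous invariant functions, uniform norm), not the paper's exact theorem; the class $\mathcal{N}$ and the readout construction are stand-ins for whatever architecture the paper actually analyzes.

```latex
% Schematic separation-constrained universality (illustrative paraphrase,
% not the submission's exact theorem).
% Assumptions: X compact, G acting on X, \mathcal{N} a class of continuous
% G-invariant networks f : X -> R^{k_f}, and \rho ranging over MLP readouts.
% The separation relation induced by \mathcal{N}:
%   x \sim_{\mathcal{N}} y   iff   f(x) = f(y) for every f in \mathcal{N}.
\[
  \overline{\bigl\{\, \rho \circ (f_1,\dots,f_m) \;:\; m \in \mathbb{N},\
      f_i \in \mathcal{N},\ \rho \text{ an MLP readout} \,\bigr\}}
  \;=\;
  \bigl\{\, h \in C(X) \;:\; x \sim_{\mathcal{N}} y \ \Rightarrow\ h(x) = h(y) \,\bigr\}.
\]
% Reading: networks composed with a fully connected readout are dense (in the
% uniform norm) exactly in the continuous functions that respect the separation
% power of the base class \mathcal{N} -- i.e. universality holds, but only up to
% the separation constraint.
```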
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 22747