Barron's Theorem for Equivariant Networks

Published: 07 Nov 2022, Last Modified: 05 May 2023. NeurReps 2022 Poster.
Keywords: Equivariance, invariance, symmetry, universality, approximation, Barron
TL;DR: We extend Barron's Theorem on efficient approximation to invariant neural networks, covering invariance under a permutation subgroup and under the rotation group.
Abstract: The incorporation of known symmetries in a learning task provides a powerful inductive bias, reducing the sample complexity of learning equivariant functions in both theory and practice. Group-symmetric architectures for equivariant deep learning are now widespread, as are accompanying universality results that verify their representational power. However, these symmetric approximation theorems suffer from the same major drawback as their original non-symmetric counterparts: they may require impractically large networks. In this work, we demonstrate that for some commonly used groups, there exist smooth subclasses of functions -- analogous to Barron classes -- that can be efficiently approximated using invariant architectures. In particular, for permutation subgroups, there exist invariant approximating architectures whose sizes, while dependent on the precise orbit structure of the function, are in many cases just as small as the non-invariant architectures given by Barron's Theorem. For the rotation group, we define an invariant approximating architecture, built on a new invariant nonlinearity that may be of independent practical interest, whose size similarly matches that of its non-invariant counterparts. Overall, we view this work as a first step towards capturing the smoothness of invariant functions in invariant universal approximation, thereby providing approximation results that are not only invariant, but efficient.
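For context, the classical result being extended can be stated as follows. This is the standard statement from Barron (1993), paraphrased from the literature rather than from this page; the exact constant varies by convention:

```latex
% Barron's Theorem (Barron, 1993), paraphrased; the constant varies by convention.
% For f with Fourier representation f(x) = \int e^{i<omega,x>} \hat f(omega) d omega
% and finite Barron constant
\[
  C_f = \int_{\mathbb{R}^d} \lVert \omega \rVert_2 \, \lvert \hat{f}(\omega) \rvert \, d\omega,
\]
% there is a two-layer sigmoidal network f_n with n hidden units such that, for
% any probability measure \mu on the ball B_r of radius r,
\[
  \int_{B_r} \bigl( f(x) - f_n(x) \bigr)^2 \, d\mu(x) \;\le\; \frac{(2 r C_f)^2}{n}.
\]
```

To see why small invariant architectures are nontrivial, consider the naive route to invariance: averaging any approximant over the group. Below is a minimal NumPy sketch of this baseline; the function names, layer sizes, and the choice of ReLU are illustrative assumptions, not the paper's construction. Group averaging achieves exact invariance but costs one evaluation per group element:

```python
import itertools
import numpy as np

def two_layer_net(x, W, b, a):
    """Barron-style two-layer network: x -> sum_k a_k * sigma(<w_k, x> + b_k)."""
    return np.maximum(W @ x + b, 0.0) @ a  # ReLU standing in for Barron's sigmoid

def symmetrize(f, group):
    """Average a scalar function over a finite permutation group.

    `group` is an iterable of index permutations of range(d); the returned
    function is exactly invariant under every permutation in `group`, at the
    cost of |G| evaluations of f per input.
    """
    perms = [np.asarray(g) for g in group]
    return lambda x: np.mean([f(x[g]) for g in perms])

# Toy usage: invariance under the full symmetric group S_3 (sizes hypothetical).
d, n = 3, 16
rng = np.random.default_rng(0)
W, b, a = rng.normal(size=(n, d)), rng.normal(size=n), rng.normal(size=n)
f = lambda x: two_layer_net(x, W, b, a)
f_inv = symmetrize(f, itertools.permutations(range(d)))

x = rng.normal(size=d)
print(np.isclose(f_inv(x), f_inv(x[[2, 0, 1]])))  # True: f_inv is S_3-invariant
```

Read against this baseline, the abstract's permutation-subgroup claim is that the invariant architecture's size tracks the orbit structure of the target function rather than paying the full |G| blow-up of naive averaging.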