Optimization Dynamics of Equivariant and Augmented Neural Networks

TMLR Paper 3153 Authors

08 Aug 2024 (modified: 18 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: We investigate the optimization of neural networks on symmetric data and compare the strategy of constraining the architecture to be equivariant with that of using data augmentation. Our analysis reveals that the relative geometry of the spaces of admissible and equivariant layers plays a key role. Under natural assumptions on the data, network, loss, and group of symmetries, we show that compatibility of the spaces of admissible and equivariant layers, in the sense that the corresponding orthogonal projections commute, implies that the sets of equivariant stationary points are identical for the two strategies. If the linear layers of the network are moreover given a unitary parametrization, the set of equivariant layers is in fact invariant under the gradient flow of the augmented models. However, our analysis also reveals that even in the latter situation, stationary points may be unstable under augmented training although they are stable for the manifestly equivariant models.
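As a minimal numerical sketch of the compatibility condition mentioned in the abstract, the following NumPy snippet builds the orthogonal projection onto equivariant layers for the cyclic shift group C_n by group averaging and checks that it commutes with the projection onto a hypothetical admissible subspace (here taken to be symmetric matrices). The choice of group, the choice of admissible subspace, and all names are illustrative assumptions for this sketch, not the paper's general setting.

```python
import numpy as np

# Illustration of the "compatibility" condition: the orthogonal projection
# P_E onto equivariant layers and the projection P_A onto admissible layers
# commute. Group and subspace are illustrative assumptions only.

n = 4
shift = np.roll(np.eye(n), 1, axis=0)                        # generator of C_n
group = [np.linalg.matrix_power(shift, k) for k in range(n)]

# Projection onto equivariant layers via group averaging (Reynolds operator),
# acting on vectorized n x n layers: W -> (1/|G|) sum_g rho(g)^{-1} W rho(g).
# For permutation matrices this reduces to the mean of rho(g) ⊗ rho(g).
P_E = sum(np.kron(g, g) for g in group) / len(group)

# Projection onto the (hypothetical) admissible subspace of symmetric
# matrices: W -> (W + W^T)/2, written as (I + K)/2 with K the transpose
# ("commutation") matrix on vec(W), using row-major vectorization.
d = n * n
K = np.zeros((d, d))
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0
P_A = (np.eye(d) + K) / 2

assert np.allclose(P_E @ P_E, P_E)           # both are orthogonal projections
assert np.allclose(P_A @ P_A, P_A)
print("projections commute:", np.allclose(P_A @ P_E, P_E @ P_A))   # -> True
# When they commute, P_A @ P_E is itself the orthogonal projection onto the
# intersection, i.e. the layers that are both admissible and equivariant.
```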
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Efstratios_Gavves1
Submission Number: 3153