Improving Equivariant Networks with Probabilistic Symmetry Breaking

Published: 17 Jun 2024, Last Modified: 12 Jul 2024
Venue: ICML 2024 Workshop GRaM
License: CC BY 4.0
Track: Extended abstract
Keywords: equivariance, symmetry, symmetry-breaking, canonicalization
TL;DR: We propose a framework for breaking symmetries, e.g. in generative models' latent spaces, by combining equivariant networks with canonicalization.
Abstract: Equivariance builds known symmetries into neural networks, often improving generalization. However, equivariant networks cannot break self-symmetries present in a given input. This poses problems both (1) for prediction tasks on symmetric domains and (2) for generative models, which must break symmetries to reconstruct from highly symmetric latent spaces. Equivariant networks are thus fundamentally limited in these contexts. To remedy this, we present a comprehensive, probabilistic framework for symmetry-breaking, based on a novel decomposition of equivariant *distributions*. Concretely, this decomposition yields a practical method for breaking symmetries in any equivariant network via randomized *canonicalization*, while retaining the inductive bias of symmetry. We show experimentally that our framework improves the performance of group-equivariant methods in modeling lattice spin systems and autoencoding graphs.
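For intuition, below is a minimal sketch of the randomized-canonicalization idea described in the abstract, using the cyclic rotation group C4 acting on square images as a toy example. The scoring rule, the function names, and the stand-in `unconstrained_net` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unconstrained_net(x):
    # Any network can go here; equivariance is NOT required of this map.
    # For illustration, add a fixed asymmetric pattern so the output is
    # visibly non-symmetric even when the input is fully symmetric.
    bias = np.zeros_like(x)
    bias[0, 0] = 1.0
    return x + bias

def randomized_canonicalization(x, rng):
    # Score the |C4| = 4 rotated copies of x and pick an argmax pose.
    # For self-symmetric x the argmax ties; breaking ties at random is the
    # probabilistic step that lets individual samples break the input's
    # symmetries while the *distribution* of outputs stays equivariant.
    scores = np.array(
        [np.rot90(x, k).ravel() @ np.arange(x.size) for k in range(4)]
    )
    ties = np.flatnonzero(np.isclose(scores, scores.max()))
    return int(rng.choice(ties))

def sample_output(x, rng):
    k = randomized_canonicalization(x, rng)
    y_canonical = unconstrained_net(np.rot90(x, k))  # net sees a canonical pose
    return np.rot90(y_canonical, -k)                 # map back to the input frame

rng = np.random.default_rng(0)
x = np.ones((4, 4))  # fully C4-symmetric input
samples = [sample_output(x, rng) for _ in range(3)]
# Each sample places the bias pixel in a (possibly) different corner:
# symmetry is broken per sample, while equivariance holds in distribution.
```

The design point this sketch tries to convey: because the random tie-break ranges over the input's stabilizer, rotating the input simply rotates the distribution of outputs, so the symmetry inductive bias is preserved even though any single sampled output can break the input's self-symmetry.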
Submission Number: 90