The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry

Published: 29 Nov 2023, Last Modified: 29 Nov 2023, NeurReps 2023 Poster
Submission Track: Extended Abstract
Keywords: Equivariant Learning, Reinforcement Learning, Robotics
TL;DR: We find that equivariant models are surprisingly effective in domains with latent or partial symmetries.
Abstract: Extensive work has demonstrated that equivariant neural networks can significantly improve sample efficiency and generalization by enforcing an inductive bias in the network architecture. These applications typically assume that the domain symmetry is fully described by explicit transformations of the model inputs and outputs. However, many real-life applications contain only latent or partial symmetries that cannot be easily described by simple transformations of the input. In these cases, it is necessary to \emph{learn} symmetry in the environment instead of imposing it mathematically on the network architecture. We discover, surprisingly, that imposing equivariance constraints that do not exactly match the domain symmetry can still be very helpful for learning the true symmetry in the environment. We differentiate between \emph{extrinsic} and \emph{incorrect} symmetry constraints and show that while imposing incorrect symmetry can impede the model's performance, imposing extrinsic symmetry can actually improve performance. We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries.
Submission Number: 47
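To make the idea of imposing an equivariance constraint concrete, below is a minimal sketch, not the authors' implementation, of one standard way to obtain a C4 rotation-equivariant model in PyTorch: symmetrizing an arbitrary base network over the group. The wrapper name is hypothetical, and the sketch assumes image inputs with spatially structured outputs (e.g., a per-pixel Q-map, common in equivariant robotic manipulation).

```python
import torch
import torch.nn as nn

class C4Symmetrize(nn.Module):
    """Wrap a base network f so the result is exactly C4-equivariant:
    f_eq(x) = (1/4) * sum_k rot(-k) f(rot(k) x),  k in {0, 1, 2, 3},
    where rot(k) rotates by k*90 degrees. Equivariance holds even when
    the base network itself has no built-in symmetry."""

    def __init__(self, base: nn.Module):
        super().__init__()
        self.base = base

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image observation; output assumed spatial, e.g. a Q-map
        outs = []
        for k in range(4):
            xr = torch.rot90(x, k, dims=(-2, -1))            # apply group element
            yr = self.base(xr)                               # evaluate base network
            outs.append(torch.rot90(yr, -k, dims=(-2, -1)))  # undo it on the output
        return torch.stack(outs).mean(dim=0)                 # average over the group


# Equivariance check on a toy base network: rotating the input rotates the output.
net = C4Symmetrize(nn.Conv2d(3, 1, kernel_size=3, padding=1))
x = torch.randn(2, 3, 8, 8)
assert torch.allclose(
    net(torch.rot90(x, 1, dims=(-2, -1))),
    torch.rot90(net(x), 1, dims=(-2, -1)),
    atol=1e-5,
)
```

The abstract's point is that such a constraint can help even when it is only \emph{extrinsic}, e.g., when observations come from a tilted camera, so a pixel-space rotation no longer corresponds exactly to the physical symmetry of the task. Libraries such as escnn offer more expressive equivariant layers than this group-averaging trick.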