Learning Symmetric Representations for Equivariant World Models

Published: 28 Jan 2022, Last Modified: 13 Feb 2023. ICLR 2022 Submission.
Keywords: equivariant, symmetry, contrastive loss, world models, transition, representation theory, generalization
Abstract: Encoding known symmetries into world models can improve generalization. However, identifying how latent symmetries manifest in the input space can be difficult. For example, rotations of an object are equivariant with respect to its orientation, but extracting this orientation from an image is difficult in the absence of supervision. In this paper, we use equivariant transition models as an inductive bias to learn symmetric latent representations in a self-supervised manner. This allows us to train non-equivariant networks to encode input data, for which the underlying symmetry may be non-obvious, into a latent space where symmetries can be used to reason about the outcomes of actions in a data-efficient manner. Our method is agnostic to the type of latent symmetry; we demonstrate its usefulness over $C_4 \times S_5$ using $G$-convolutions and GNNs, over $D_4 \ltimes (\mathbb{R}^2,+)$ using $E(2)$-steerable CNNs, and over $\mathrm{SO}(3)$ using tensor field networks. In all three cases, we demonstrate improvements relative to both fully-equivariant and non-equivariant baselines.
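To make the training setup concrete, below is a minimal, hypothetical sketch of the kind of pipeline the abstract describes, assuming a $C_4$ rotation symmetry acting by a fixed 2-D representation on the latent space; all names, dimensions, and the specific InfoNCE-style contrastive loss are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch (names and dimensions are illustrative, not the paper's
# code). A non-equivariant encoder maps images into a latent space on which
# C4 acts by a fixed representation rho; the transition model is equivariant
# by construction (it applies rho(a) to the latent), and a contrastive loss
# matches the predicted next latent to the encoding of the observed next frame.

def rho(k: torch.Tensor) -> torch.Tensor:
    """Representation of C4 on R^2: rotation by k * 90 degrees, k in {0,1,2,3}."""
    theta = k.float() * (torch.pi / 2)
    c, s = torch.cos(theta), torch.sin(theta)
    row1 = torch.stack([c, -s], dim=-1)
    row2 = torch.stack([s, c], dim=-1)
    return torch.stack([row1, row2], dim=-2)            # (B, 2, 2)

encoder = nn.Sequential(                                # non-equivariant encoder
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 2),                                  # 2-D latent carrying the C4 action
)

def contrastive_loss(z_pred, z_next, temperature=0.5):
    """InfoNCE-style loss: the matching (pred, next) pair is the positive;
    all other pairs in the batch serve as negatives."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_pred @ z_next.t() / temperature          # (B, B) similarities
    labels = torch.arange(z_pred.size(0))               # positives on the diagonal
    return F.cross_entropy(logits, labels)

# One self-supervised step on a placeholder batch of (obs, action, next_obs).
obs      = torch.randn(8, 1, 64, 64)
actions  = torch.randint(0, 4, (8,))                    # C4 action: rotate by k * 90 deg
next_obs = torch.randn(8, 1, 64, 64)

z      = encoder(obs)
z_pred = torch.bmm(rho(actions), z.unsqueeze(-1)).squeeze(-1)  # equivariant transition
z_next = encoder(next_obs)
contrastive_loss(z_pred, z_next).backward()             # gradients flow only into the encoder
```

In this sketch the transition model has no learnable parameters, so the loss can only be reduced by the encoder arranging observations so that the known group action actually predicts the dynamics; this is, roughly, the inductive bias the abstract describes.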
One-sentence Summary: Non-equivariant networks can learn symmetric features for use with equivariant neural networks in domains where the symmetry is not obvious.