Homomorphism AutoEncoder --- Learning Group Structured Representations from Observed Transitions

Published: 07 Nov 2022, Last Modified: 05 May 2023, NeurReps 2022 Oral
Keywords: Representation Learning, Unsupervised Learning, Sensorimotor, Interaction based Learning, Autoencoder, Group Representation, Equivariant Representation, Homomorphism, Disentanglement
TL;DR: We propose an autoencoder equipped with a group representation of observed transitions acting linearly on its latent space, trained on 2-step transitions to jointly learn the group representation and a structured representation of observations.
Abstract: It is crucial for agents, both biological and artificial, to acquire world models that veridically represent the external world and how it is modified by the agent's own actions. We consider the case where such modifications can be modelled as transformations from a group of symmetries structuring the world state space. We use tools from representation learning and group theory to learn latent representations that account for both sensory information and the actions that alter it during interactions. We introduce the Homomorphism AutoEncoder (HAE), an autoencoder equipped with a learned group representation acting linearly on its latent space, trained on 2-step transitions to implicitly enforce the group homomorphism property on the action representation. Compared to existing work, our approach makes fewer assumptions on the group representation and on which transformations the agent can sample from. We motivate our method theoretically, and demonstrate empirically that it can learn the correct group representation and the topology of the environment. We also compare its performance in trajectory prediction with previous methods.
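The sketch below illustrates the kind of 2-step training objective the abstract describes: an encoder/decoder pair plus a learned linear group action on the latent space, rolled out over two observed transitions so that composing the action representations is implicitly encouraged to respect the group structure. All module names, dimensions, parameterizations (e.g. the matrix exponential), and loss weights here are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of an HAE-style 2-step training objective (PyTorch).
# Hypothetical components; only the overall structure follows the abstract.
import torch
import torch.nn as nn

class HAE(nn.Module):
    def __init__(self, obs_dim=784, latent_dim=4, action_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))
        # Map an observed action g to a latent_dim x latent_dim matrix rho(g).
        # The matrix exponential keeps rho(g) invertible (assumed choice).
        self.rho_net = nn.Linear(action_dim, latent_dim * latent_dim)
        self.latent_dim = latent_dim

    def rho(self, g):
        A = self.rho_net(g).view(-1, self.latent_dim, self.latent_dim)
        return torch.matrix_exp(A)  # (batch, d, d) linear action on the latent space

def two_step_loss(model, o0, g1, o1, g2, o2):
    """Loss on a 2-step transition o0 --g1--> o1 --g2--> o2.
    Rolling the latent forward through rho(g1) and then rho(g2), and matching
    both intermediate and final observations, implicitly pushes
    rho(g2) @ rho(g1) to behave like the representation of the composed action."""
    mse = nn.functional.mse_loss
    z0 = model.encoder(o0)
    z1_pred = torch.bmm(model.rho(g1), z0.unsqueeze(-1)).squeeze(-1)
    z2_pred = torch.bmm(model.rho(g2), z1_pred.unsqueeze(-1)).squeeze(-1)
    recon = mse(model.decoder(z0), o0)
    pred = (mse(model.decoder(z1_pred), o1) + mse(model.decoder(z2_pred), o2)
            + mse(z1_pred, model.encoder(o1)) + mse(z2_pred, model.encoder(o2)))
    return recon + pred
```

In this reading, no explicit homomorphism penalty is needed: because the second prediction reuses the output of the first, accurate 2-step prediction already constrains the composition of the learned action matrices.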