Understanding Optimization Challenges when Encoding to Geometric Structures

Published: 07 Nov 2022 · Last Modified: 05 May 2023 · NeurReps 2022 Poster
Keywords: representation learning, autoencoders, homeomorphism, topological, equivariant, Lie groups, isometry
TL;DR: Imposing geometric inductive biases in representation learning can lead to topological obstructions during training, but these can be circumvented using a loss designed to encourage isometric embedding.
Abstract: Geometric inductive biases such as spatial curvature, factorizability, or equivariance have been shown to enable learning of latent spaces that better reflect the structure of data and perform better on downstream tasks. Training such models, however, can be challenging due to the topological constraints imposed by encoding to such structures. In this paper, we theoretically and empirically characterize obstructions to training autoencoders with geometric latent spaces. These include singularities (e.g. self-intersection), incorrect degree or winding number, and non-isometric homeomorphic embeddings. We propose a method, the isometric autoencoder, to improve the stability of training and convergence to an isometric mapping in geometric latent spaces. An empirical evaluation over two domains demonstrates that our approach can better circumvent the identified optimization problems.
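
The page does not reproduce the proposed loss, so the following is only a minimal PyTorch sketch of one common way to encourage a locally isometric encoder: penalizing Jacobian-vector products that fail to preserve the length of random unit tangent vectors. The names `isometry_loss` and `encoder`, and the specific form of the penalty, are illustrative assumptions, not the paper's exact objective.

```python
import torch

def isometry_loss(encoder, x):
    """Hypothetical regularizer (illustrative, not the paper's exact loss):
    push the encoder toward a local isometry by asking Jacobian-vector
    products to preserve the length of random unit tangent vectors.
    Assumes x has shape (batch, d)."""
    v = torch.randn_like(x)
    v = v / v.norm(dim=-1, keepdim=True)      # random unit direction per sample
    _, jv = torch.autograd.functional.jvp(    # jv = J(x) @ v via autodiff
        encoder, (x,), (v,), create_graph=True
    )
    # An isometric embedding preserves lengths: ||J v|| = ||v|| = 1 everywhere.
    return ((jv.norm(dim=-1) - 1.0) ** 2).mean()

# Sketch of use: this term would be added, with some weight, to a standard
# autoencoder reconstruction loss.
enc = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2)
)
x = torch.randn(16, 3)
loss = isometry_loss(enc, x)
loss.backward()
```

A penalty of this kind acts only on first-order (metric) structure, which is consistent with the abstract's motivation: it does not change the topology of the latent space, but it can discourage the non-isometric homeomorphic embeddings identified as an optimization obstruction.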