Topological Obstructions and How to Avoid Them

Published: 21 Sept 2023, Last Modified: 02 Nov 2023, NeurIPS 2023 poster
Keywords: representation learning, variational autoencoders, homeomorphism, topological, equivariant, Lie groups, normalizing flows
TL;DR: Imposing geometric inductive biases in representation learning can lead to topological obstructions during training, but these can be circumvented by using multimodal distributions, such as those defined by normalizing flows, as variational distributions.
Abstract: Incorporating geometric inductive biases into models can aid interpretability and generalization, but encoding to a specific geometric structure can be challenging due to the imposed topological constraints. In this paper, we theoretically and empirically characterize obstructions to training encoders with geometric latent spaces. We show that local optima can arise due to singularities (e.g., self-intersections) or due to an incorrect degree or winding number. We then discuss how normalizing flows can potentially circumvent these obstructions by defining multimodal variational distributions. Inspired by this observation, we propose a new flow-based model that maps data points to multimodal distributions over geometric spaces, and we empirically evaluate it on two domains. We observe improved stability during training and a higher chance of converging to a homeomorphic encoder.
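To make the flow-based idea concrete, below is a minimal, hypothetical sketch in PyTorch of a normalizing flow on the circle S^1 whose pushforward of a uniform base density can become multimodal as layers are stacked. This is an illustrative example, not the paper's model: the layer form theta -> theta + alpha * sin(theta - beta) with |alpha| < 1 (a standard circle-diffeomorphism layer from the flows-on-tori-and-spheres literature) and the names CircleShiftFlow and CircularFlow are assumptions. In the amortized setting the abstract describes, the flow parameters would be predicted per data point by an encoder rather than learned globally.

```python
import math
import torch
import torch.nn as nn

class CircleShiftFlow(nn.Module):
    """One invertible layer on S^1: theta -> theta + alpha * sin(theta - beta).

    The derivative 1 + alpha * cos(theta - beta) stays positive when |alpha| < 1,
    so the layer is a diffeomorphism of the circle.
    """
    def __init__(self):
        super().__init__()
        self._raw_alpha = nn.Parameter(torch.zeros(1))  # unconstrained parameter
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, theta):
        alpha = 0.99 * torch.tanh(self._raw_alpha)  # enforce |alpha| < 1
        out = (theta + alpha * torch.sin(theta - self.beta)) % (2 * math.pi)
        # log |d out / d theta| for the change-of-variables formula
        log_det = torch.log1p(alpha * torch.cos(theta - self.beta))
        return out, log_det

class CircularFlow(nn.Module):
    """A stack of circle layers pushed forward from a uniform base on [0, 2*pi)."""
    def __init__(self, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList([CircleShiftFlow() for _ in range(n_layers)])

    def sample_and_log_prob(self, n: int):
        theta = 2 * math.pi * torch.rand(n)               # sample the uniform base
        log_p = torch.full((n,), -math.log(2 * math.pi))  # base log-density
        for layer in self.layers:
            theta, log_det = layer(theta)
            log_p = log_p - log_det                       # change of variables
        return theta, log_p

flow = CircularFlow()
samples, log_probs = flow.sample_and_log_prob(5)
print(samples, log_probs)
```

Intuitively, such a multimodal variational distribution can place posterior mass on several candidate latents at once, which is how the abstract suggests flows may help training escape local optima such as an incorrect winding number.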
Submission Number: 13971