Regularized Autoencoders for Isometric Representation Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 (ICLR 2022 Poster)
Keywords: Autoencoders, Manifold Learning, Regularization, Geometry, Distortion
Abstract: The recent success of autoencoders for representation learning can be traced in large part to the addition of a regularization term. Such regularized autoencoders "constrain" the representation so as to prevent overfitting to the data while producing a parsimonious generative model. A regularized autoencoder should in principle learn not only the data manifold, but also a set of geometry-preserving coordinates for the latent representation space; by geometry-preserving we mean that the latent representation should attempt to preserve actual distances and angles on the data manifold. In this paper we first formulate a hierarchy of geometry-preserving mappings (isometries, conformal mappings of degree $k$, and area-preserving mappings). We then show that a conformal regularization term of degree zero -- i.e., one that attempts to preserve angles and relative distances, rather than angles and exact distances -- produces data representations superior to those of existing methods. Applying our algorithm to an unsupervised information retrieval task on the CelebA dataset with 40 attribute annotations, we achieve 79\% precision at five retrieved images, an improvement of more than 10\% over recent related work. Code is available at https://github.com/Gabe-YHLee/IRVAE-public.
One-sentence Summary: Regularized autoencoders that simultaneously learn the data manifold and a set of latent-space coordinates that preserve the geometry of the learned manifold.
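
The core idea, penalizing the decoder so that its pullback metric is a constant multiple of the identity, lends itself to a compact sketch. The snippet below is a minimal illustration of a degree-zero conformal penalty, not the authors' released implementation (see the linked IRVAE-public repository for that); the function name `conformal_penalty`, the per-sample Jacobian computation, and the batch-shared scale `c` are illustrative assumptions.

```python
# A minimal sketch (not the authors' released code) of a degree-zero
# conformal regularizer, assuming a PyTorch decoder that maps latent
# codes z of shape (B, d) to outputs of shape (B, D). The penalty pushes
# the pullback metric H(z) = J(z)^T J(z) toward c * I with a single
# scale c shared across the batch, so that angles and relative
# distances (but not absolute distances) are preserved.

import torch
import torch.nn as nn


def conformal_penalty(decoder: nn.Module, z: torch.Tensor) -> torch.Tensor:
    _, d = z.shape
    eye = torch.eye(d, device=z.device)

    # Per-sample decoder Jacobians (exact but slow; random jvp probes
    # are a cheaper stochastic alternative for large models).
    jacobians = []
    for zi in z:
        Ji = torch.autograd.functional.jacobian(
            lambda u: decoder(u.unsqueeze(0)).squeeze(0),
            zi,
            create_graph=True,  # keep the graph so the penalty trains the decoder
        )  # shape (D, d)
        jacobians.append(Ji)
    J = torch.stack(jacobians)          # (B, D, d)
    H = J.transpose(1, 2) @ J           # (B, d, d) pullback metric

    # Shared conformal factor: batch average of tr(H) / d.
    c = H.diagonal(dim1=1, dim2=2).sum(-1).mean() / d

    # Deviation of H from the scaled identity c * I.
    return ((H / c - eye) ** 2).sum(dim=(1, 2)).mean()


# Usage inside a training step (hypothetical names):
# loss = reconstruction_loss + lam * conformal_penalty(decoder, z)
```

Because `c` is free rather than fixed to 1, the penalty is invariant to a global rescaling of the latent coordinates, which is what distinguishes the degree-zero conformal objective from a strict isometry objective.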