On the Role of Riemannian Metric in Isometric Representation Learning

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Autoencoders, Manifold, Geometry, Riemannian metric, Isometric representation learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We take the initial stride to identify the role of the Riemannian metric in isometric representation learning.
Abstract: Under the manifold hypothesis, isometric representation learning aims to discover latent-space coordinates that preserve the geometry of the data manifold. This geometry must be specified a priori, and it is typically defined by a metric inherited from the ambient data space. Most existing methods assume the identity metric on the ambient space (i.e., a Euclidean data space), arguably the most reasonable choice in unsupervised settings. However, this unsupervised choice of the identity metric inherently lacks the capacity to capture the semantics that humans perceive in the data. How to formulate a data-semantic-aware Riemannian metric for the ambient space remains an open question, particularly in the context of isometric representation learning. In this work, we propose a method for constructing \textit{neural feature-based metrics} that capture data semantics by adopting knowledge from any pre-trained feature extraction model. We then conduct a comparative study of the effects of the following Riemannian metrics on isometric representation learning: (i) the identity metric, (ii) the inverse density-based metric (an existing unsupervised metric construction method), and (iii) the proposed neural feature-based metrics. Experiments on the standard image datasets \textit{MNIST}, \textit{Fashion MNIST}, and \textit{CIFAR10} show that the neural feature-based metrics produce data-semantic-aware representations, in which data with similar semantics are located nearby, and in some cases discover unseen hierarchical structures in the datasets.
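A common way to read a "feature-based metric" of the kind the abstract describes is as the pullback of the Euclidean metric in feature space through a feature extractor f, i.e. G(x) = J_f(x)^T J_f(x). The sketch below illustrates this construction with a toy hand-written feature map standing in for a pre-trained network and a numerical Jacobian; the exact construction used in the paper is not specified here, so treat this as an illustrative assumption, not the authors' method.

```python
import numpy as np

def feature_map(x):
    # Hypothetical toy feature extractor f: R^2 -> R^3.
    # In the paper's setting this would be a pre-trained neural network.
    return np.array([np.sin(x[0]) + x[1], x[0] * x[1], np.cos(x[1])])

def pullback_metric(f, x, eps=1e-5):
    """Feature-based Riemannian metric G(x) = J_f(x)^T J_f(x):
    the pullback of the Euclidean metric in feature space through f,
    with the Jacobian estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    D = x.size
    f0 = f(x)
    J = np.zeros((f0.size, D))
    for i in range(D):
        dx = np.zeros(D)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J.T @ J  # symmetric positive semi-definite by construction

G = pullback_metric(feature_map, [0.3, -0.7])
```

Under such a metric, the length of a curve in data space is measured by how much the extracted features change along it, so points with similar semantics (similar features) end up close, which matches the behavior the abstract reports.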
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 109