Invertible Manifold Learning for Dimension Reduction

28 Sept 2020 (modified: 22 Oct 2023) · ICLR 2021 Conference Blind Submission
Keywords: Manifold Learning, Inverse Model, Representation Learning
Abstract: It is widely believed that a dimension reduction (DR) process inevitably drops information in most practical scenarios. Consequently, most methods, including manifold-based DR methods, try to preserve some essential information of the data after DR, yet they usually fail to yield satisfying results, especially in high-dimensional cases. In the context of manifold learning, we argue that a good low-dimensional representation should preserve the topological and geometric properties of the data manifold, which together carry the entire information of the manifold. In this paper, we define the problem of information-lossless nonlinear dimension reduction (NLDR) under the manifold assumption and propose a novel two-stage NLDR method, called invertible manifold learning ($\textit{inv-ML}$), to tackle it. A $\textit{local isometry}$ constraint that preserves local geometry is applied under this assumption in $\textit{inv-ML}$. First, a homeomorphic $\textit{sparse coordinate transformation}$ is learned to find a low-dimensional representation without losing topological information. Second, a $\textit{linear compression}$ is performed on the learned sparse coding, trading off the target dimension against the incurred information loss. Experiments are conducted on seven datasets with a neural network implementation of $\textit{inv-ML}$, called $\textit{i-ML-Enc}$; they demonstrate that the proposed $\textit{inv-ML}$ not only achieves invertible NLDR in comparison with typical existing methods but also reveals the characteristics of the learned manifolds through linear interpolation in latent space. Moreover, we find that the reliability of the tangent space approximated by the local neighborhood on real-world datasets is key to the success of manifold-based DR algorithms. The code will be made available soon.
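To make the two-stage idea concrete, below is a minimal, hypothetical sketch (not the authors' released i-ML-Enc code) of how such a pipeline could be wired up in PyTorch: a learned coordinate transformation regularized toward local isometry via a k-nearest-neighbor distance-distortion penalty, followed by a linear compression that truncates the (ideally sparse) tail coordinates. All class names, layer sizes, and the loss weighting are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code) of the
# two-stage inv-ML idea: a nonlinear coordinate transform with a local-isometry
# penalty, then a linear compression that drops the tail of the sparse coordinates.
import torch
import torch.nn as nn

class InvMLEncSketch(nn.Module):
    def __init__(self, dim_in: int, dim_latent: int, hidden: int = 256):
        super().__init__()
        # Stage 1: nonlinear coordinate transformation (hypothetical architecture),
        # kept close to a homeomorphism by the isometry constraint during training.
        self.transform = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, dim_in),
        )
        self.dim_latent = dim_latent  # target dimension after linear compression

    def forward(self, x):
        z = self.transform(x)            # sparse coordinates, same dimension as x
        z_low = z[:, : self.dim_latent]  # Stage 2: linear compression (truncation)
        return z, z_low

def local_isometry_loss(x, z, k: int = 10):
    """Penalize distortion of pairwise distances within k-nearest neighborhoods,
    a simple stand-in for a local isometry constraint."""
    d_x = torch.cdist(x, x)  # input-space pairwise distances
    d_z = torch.cdist(z, z)  # coordinate-space pairwise distances
    knn = d_x.topk(k + 1, largest=False).indices[:, 1:]  # k neighbors, excluding self
    mask = torch.zeros_like(d_x).scatter_(1, knn, 1.0)
    return ((d_x - d_z).pow(2) * mask).sum() / mask.sum()

# Illustrative usage: x is a batch of high-dimensional points near the data manifold.
# x = torch.randn(128, 784)
# model = InvMLEncSketch(dim_in=784, dim_latent=10)
# z, z_low = model(x)
# loss = local_isometry_loss(x, z) + 0.1 * z[:, model.dim_latent:].abs().mean()
```

The sparsity penalty on the tail coordinates in the usage example is one plausible way to encourage the "sparse coding" the abstract refers to, so that truncating those coordinates incurs little information loss; the paper's actual objective may differ.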
One-sentence Summary: We propose a novel invertible dimension reduction process for manifold learning with a neural network implementation, and along the way explore the inherent difficulty of manifold learning in real-world scenarios.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.04012/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=-eSOV-oYrW