Manifold-Matching Autoencoders

ICLR 2026 Conference Submission 14378 Authors

18 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: manifold learning, autoencoders, topology preservation, dimensionality reduction, representation learning, geometric regularization, unsupervised learning
TL;DR: We align autoencoder latent spaces with precomputed embeddings using distance-based regularization to control latent topology, and show benefits in downstream tasks such as synthetic image generation.
Abstract: We propose Manifold-Matching Autoencoders (MMAEs), a simple yet effective framework that aligns autoencoder latent spaces with precomputed geometric references. This is accomplished by using distance-based regularization to match latent and reference distance matrices, enabling the same architecture to achieve different data representations by simply changing the reference embedding. We demonstrate that MMAEs achieve scalable topological control in high-dimensional settings where existing methods become computationally intractable. One key finding is that aligning with PCA yields unexpected benefits: MMAEs achieve state-of-the-art preservation of the original data structure, comparable to sophisticated topological autoencoders, while maintaining significantly better reconstruction quality and more efficient computation. When combined with VAEs, the proposed regularization concentrates variance in fewer latent dimensions. This balance between structure preservation, variance concentration, and reconstruction fidelity enables superior generative capabilities, including clearer interpolations and more effective discovery of semantically meaningful latent directions for attribute manipulation.
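To make the distance-based regularization concrete, here is a minimal sketch of the idea in PyTorch, assuming a standard autoencoder training loop. The function names, the use of squared error between pairwise-distance matrices, and the weighting parameter `lam` are illustrative assumptions, not the authors' exact formulation; only the high-level scheme (matching latent distances to a precomputed reference embedding, e.g. PCA coordinates) comes from the abstract.

```python
import torch
import torch.nn.functional as F

def manifold_matching_loss(z: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between pairwise distances in the latent batch `z`
    and in the precomputed reference embedding `ref` (e.g., PCA coordinates
    of the same batch). Hypothetical form of the MMAE regularizer."""
    d_latent = torch.cdist(z, z)    # (B, B) latent pairwise distances
    d_ref = torch.cdist(ref, ref)   # (B, B) reference pairwise distances
    return F.mse_loss(d_latent, d_ref)

def total_loss(x: torch.Tensor, x_hat: torch.Tensor,
               z: torch.Tensor, ref: torch.Tensor,
               lam: float = 0.1) -> torch.Tensor:
    """Sketch of the full objective: reconstruction plus distance matching,
    weighted by a hypothetical trade-off coefficient `lam`."""
    recon = F.mse_loss(x_hat, x)
    return recon + lam * manifold_matching_loss(z, ref)
```

Swapping the reference embedding (PCA, UMAP, or any other precomputed coordinates passed in as `ref`) changes the geometry the latent space is pulled toward without touching the architecture, which is the flexibility the abstract highlights.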
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 14378