Topological Autoencoders

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
Abstract: We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we can construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
Code: https://osf.io/abuce/?view_only=f16d65d3f73e4918ad07cdd08a1a0d4b
Keywords: Topology, Deep Learning, Autoencoders, Persistent Homology, Representation Learning, Dimensionality Reduction, Topological Machine Learning, Topological Data Analysis
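
Below is a minimal, illustrative sketch of a topological loss term of the kind described in the abstract, not the official implementation (that is linked above). It is restricted to 0-dimensional persistent homology, whose persistence pairs for a Vietoris–Rips filtration coincide with the edges of a minimum spanning tree of the pairwise-distance matrix. The names `persistence_pairs` and `topological_loss`, and the PyTorch/SciPy setup, are assumptions for illustration only.

```python
import numpy as np
import torch
from scipy.sparse.csgraph import minimum_spanning_tree


def persistence_pairs(dist: torch.Tensor) -> torch.Tensor:
    """(i, j) indices of the 0-dim persistence pairs = MST edges of `dist`."""
    mst = minimum_spanning_tree(dist.detach().cpu().numpy())
    rows, cols = mst.nonzero()
    return torch.from_numpy(np.stack([rows, cols], axis=1)).long()


def topological_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Topological loss for a mini-batch x (input space) and its latent codes z."""
    dist_x = torch.cdist(x, x)           # pairwise distances, input space
    dist_z = torch.cdist(z, z)           # pairwise distances, latent space
    pairs_x = persistence_pairs(dist_x)  # topologically relevant edges (input)
    pairs_z = persistence_pairs(dist_z)  # topologically relevant edges (latent)
    # Match latent to input distances on the input-selected edges and vice versa;
    # gradients flow through the distance matrices, not through the pairings.
    loss_xz = ((dist_x[pairs_x[:, 0], pairs_x[:, 1]]
                - dist_z[pairs_x[:, 0], pairs_x[:, 1]]) ** 2).sum()
    loss_zx = ((dist_z[pairs_z[:, 0], pairs_z[:, 1]]
                - dist_x[pairs_z[:, 0], pairs_z[:, 1]]) ** 2).sum()
    return 0.5 * (loss_xz + loss_zx)
```

In a training loop, such a term would be added to the usual reconstruction objective, e.g. `loss = reconstruction_loss + lam * topological_loss(x, z)`, where `lam` is a hypothetical weighting hyperparameter.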
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:1906.00722/code)
