Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction

Published: 28 Jan 2022, Last Modified: 22 Oct 2023. ICLR 2022 Submitted. Readers: Everyone
Keywords: Linear discriminative representation, Generative model
Abstract: This work proposes a new computational framework for automatically learning a closed-loop transcription between multi-class, multi-dimensional data and a linear discriminative representation (LDR) consisting of multiple multi-dimensional linear subspaces. In particular, we argue that the optimal encoding and decoding mappings sought can be formulated as the equilibrium point of a two-player minimax game between the encoder and decoder. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure of distances between mixtures of subspace-like Gaussians in the feature space. Our formulation avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. Conceptually and computationally, this new formulation largely unifies the benefits of auto-encoding and GANs and naturally extends them to learning a representation that is both discriminative and generative for complex multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark datasets demonstrate the tremendous potential of this framework: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and often better than, existing methods based on GANs, VAEs, or a combination of both.
One-sentence Summary: A new computational framework for automatically learning a closed-loop transcription between multi-class multi-dimensional data and a linear discriminative representation (LDR) that consists of multiple multi-dimensional linear subspaces.
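The rate-reduction utility referenced in the abstract measures how much the coding rate of all features exceeds the average coding rate of the per-class features. A minimal numpy sketch of that measure is below; the function names, the n-samples-by-d-features layout, and the default precision `eps` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z), with Z of shape (n, d).
    n, d = Z.shape
    scale = d / (n * eps ** 2)
    return 0.5 * np.linalg.slogdet(np.eye(d) + scale * Z.T @ Z)[1]

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (n_j / n) * R(Z_j):
    # rate of the whole feature set minus the label-weighted
    # average rate of each class's features.
    n = Z.shape[0]
    class_term = sum(
        (np.sum(labels == j) / n) * coding_rate(Z[labels == j], eps)
        for j in np.unique(labels)
    )
    return coding_rate(Z, eps) - class_term
```

Intuitively, the measure is zero when all samples share one class (nothing to discriminate) and grows as classes occupy distinct, well-separated subspaces, which is why it serves as a natural game utility between encoder and decoder.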
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2111.06636/code)