Toward Understanding Supervised Representation Learning with RKHS and GAN

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: Reproducing kernel Hilbert space, GAN, neural network, statistical guarantee
Abstract: The success of deep supervised learning depends on its ability to learn data representations automatically. A good representation of high-dimensional complex data should enjoy low dimensionality and disentanglement while losing as little information as possible. In this work, we give a statistical understanding of how these representation goals can be achieved with reproducing kernel Hilbert spaces (RKHS) and generative adversarial networks (GAN). At the population level, we formulate the ideal representation learning task as finding a nonlinear map that minimizes the sum of a loss characterizing conditional independence (with RKHS) and a loss characterizing disentanglement (with GAN); a schematic sketch of this objective is given below. At the sample level, we estimate the target map nonparametrically with deep neural networks, and we prove consistency in terms of the population objective function value. We validate the proposed methods via comprehensive numerical experiments and real data analysis in the context of regression and classification. The resulting prediction accuracies are better than those of state-of-the-art methods.
One-sentence Summary: This paper offers an understanding of the success of deep learning via deep representation learning with RKHS and GAN.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=x7J7X5FFW7
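
The following is a minimal sketch of the two-part objective described in the abstract, not the authors' actual implementation. It assumes PyTorch, uses an HSIC-type RKHS dependence term as a stand-in surrogate for the conditional-independence loss, and uses a discriminator-based GAN term that pushes the learned representation toward a factorized Gaussian reference for disentanglement. All names (`hsic`, `representer`, `critic`, the weight `lam`) and architectures are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_gram(z, sigma=1.0):
    # RBF kernel Gram matrix on a batch of vectors.
    d2 = torch.cdist(z, z).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: an RKHS-based measure of dependence between x and y.
    n = x.size(0)
    H = torch.eye(n) - torch.full((n, n), 1.0 / n)  # centering matrix
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

d_in, d_rep, lam = 64, 8, 1.0  # illustrative dimensions and trade-off weight
representer = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_rep))
critic = nn.Sequential(nn.Linear(d_rep, 64), nn.ReLU(), nn.Linear(64, 1))
opt_r = torch.optim.Adam(representer.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(128, d_in)                # placeholder batch of inputs
    y = x[:, :1] + 0.1 * torch.randn(128, 1)  # placeholder response
    r = representer(x)
    ref = torch.randn_like(r)                 # factorized reference N(0, I)

    # Critic update: distinguish R(X) from the reference distribution.
    c_loss = (F.binary_cross_entropy_with_logits(critic(r.detach()), torch.zeros(128, 1))
              + F.binary_cross_entropy_with_logits(critic(ref), torch.ones(128, 1)))
    opt_c.zero_grad()
    c_loss.backward()
    opt_c.step()

    # Representer update: maximize dependence between R(X) and Y (a simple
    # surrogate for the conditional-independence criterion Y ⊥ X | R(X)),
    # plus the non-saturating GAN loss toward the reference distribution.
    r_loss = -hsic(r, y) + lam * F.binary_cross_entropy_with_logits(
        critic(r), torch.ones(128, 1))
    opt_r.zero_grad()
    r_loss.backward()
    opt_r.step()
```

Minimizing the first term encourages the representation to retain the predictive information in X about Y, while the second term shapes the marginal distribution of R(X) toward an independent-coordinate reference, which is one way to operationalize disentanglement.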