On the Limits of Learning Representations with Label-Based Supervision

ICLR 2017 workshop submission
Abstract: Advances in neural network based classifiers have accelerated the progress of automatic representation learning. Since the emergence of AlexNet, every winning submission of the ImageNet challenge has employed end-to-end representation learning, and because good representations are useful for transfer learning, representation learning has become an important task, distinct from supervised learning. At present, this distinction is inconsequential, as supervised methods are state-of-the-art in learning transferable representations, which are widely reused for tasks such as evaluating the quality of generated samples. In this work, however, we demonstrate that supervised learning is limited in its capacity for representation learning. Based on an experimentally validated assumption, we show that the existence of a set of features will hinder the learning of additional features. We also show that the total incentive to learn features in supervised learning is bounded by the entropy of the labels. We hope that our analysis will provide a rigorous motivation for further exploration of other methods for learning robust and transferable representations.
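A minimal sketch of the kind of entropy bound the abstract claims, under standard information-theoretic assumptions (F and Y are generic placeholders for learned features and labels, not necessarily the paper's own notation): since I(F; Y) = H(Y) - H(Y | F), the information about the input that a label-based objective can reward the features for carrying satisfies

\[ I(F; Y) \;\le\; H(Y) \;\le\; \log_2 |\mathcal{Y}| \ \text{bits}, \]

so a 1000-class task such as ImageNet supplies at most \( \log_2 1000 \approx 9.97 \) bits of supervisory signal per example, regardless of network capacity.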
Keywords: Theory, Deep learning, Transfer Learning
Conflicts: stanford.edu, tsinghua.edu.cn, duke.edu