On the informativeness of supervision signals

Published: 21 Nov 2022 · Last Modified: 05 May 2023 · InfoCog @ NeurIPS 2022 Spotlight
Keywords: representation learning, deep learning, neural networks, information theory, supervised learning
TL;DR: We analyze how informative different supervision signals are for representation learning.
Abstract: Learning transferable representations by training a classifier is a well-established technique in deep learning (e.g. ImageNet pre-training), but there is little theory explaining why this kind of task-specific pre-training should yield 'good' representations. We conduct an information-theoretic analysis of several commonly used supervision signals to determine how they contribute to representation learning performance and how the dynamics are affected by training parameters such as the number of labels, classes, and dimensions in the training dataset. We confirm this analysis empirically in a series of simulations and conduct a cost-benefit analysis that establishes a tradeoff curve, allowing users to optimize the cost of supervising representation learning.
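The abstract does not spell out the quantities involved, but a minimal sketch can illustrate the kind of information-theoretic measurement the analysis rests on: how many bits a hard labeling carries, and how much of that information a (noisy) representation retains. The plug-in estimators and all names below are illustrative assumptions for a toy synthetic setting, not the paper's actual method.

```python
import numpy as np

def entropy(labels):
    """Plug-in entropy estimate (in bits) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(labels, codes):
    """Plug-in estimate of I(Y; Z) in bits for two discrete vectors,
    via I(Y; Z) = H(Y) + H(Z) - H(Y, Z)."""
    joint = [f"{y}|{z}" for y, z in zip(labels, codes)]
    return entropy(labels) + entropy(codes) - entropy(joint)

rng = np.random.default_rng(0)
n, c = 10_000, 10                      # number of examples and classes (toy choice)
y = rng.integers(0, c, size=n)         # synthetic class labels

# A roughly uniform hard labeling into c classes carries about log2(c)
# bits per example -- one axis along which supervision signals differ
# in informativeness.
print(f"H(Y) ~ {entropy(y):.3f} bits (log2(c) = {np.log2(c):.3f})")

# A noisy stand-in for a learned representation: the label corrupted
# with probability 0.2. The mutual information quantifies how much
# label information the representation preserves.
noise = rng.random(n) < 0.2
z = np.where(noise, rng.integers(0, c, size=n), y)
print(f"I(Y; Z) ~ {mutual_information(y, z):.3f} bits")
```

Note that plug-in estimates like these are biased upward for small samples; the toy sample size here is large relative to the number of classes precisely to keep that bias negligible.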
In-person Presentation: yes