On the Informativeness of Supervision Signals

Published: 08 May 2023, Last Modified: 26 Jun 2023 (UAI 2023)
Keywords: data annotation, soft labels, representation learning, neural networks, deep learning
TL;DR: We propose a framework that enables comparison of supervision signals to help users optimize data annotation for supervised learning.
Abstract: Supervised learning typically focuses on learning transferable representations from training examples annotated by humans. While rich annotations (like soft labels) carry more information than sparse annotations (like hard labels), they are also more expensive to collect. We use information theory to compare how a number of commonly used supervision signals contribute to representation-learning performance, as well as how their capacity is affected by factors such as the number of labels, classes, dimensions, and noise. Our framework provides theoretical justification for using hard labels in the big-data regime, but richer supervision signals for few-shot learning and out-of-distribution generalization. We validate these results empirically in a series of experiments with over 1 million crowdsourced image annotations and conduct a cost-benefit analysis to establish a tradeoff curve that enables users to optimize the cost of supervising representation learning on their own datasets.
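As an informal illustration of the abstract's central intuition, the sketch below compares the Shannon entropy of a hard (one-hot) label with that of a soft label, such as an average of crowdworker votes. The soft label's nonzero entropy reflects the extra graded class information a single rich annotation can carry. This is only a toy example of the intuition, not the paper's information-theoretic framework; the `entropy` helper and the example distributions are hypothetical.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy, in bits, of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p + eps)))

# A hard label puts all mass on one class; as a distribution it has
# zero entropy, conveying only a single class identity per annotation.
hard_label = np.array([0.0, 0.0, 1.0, 0.0])

# A soft label spreads mass over plausible classes, so one annotation
# also conveys graded information about class similarity/confusability.
soft_label = np.array([0.05, 0.10, 0.70, 0.15])

print(f"hard-label entropy: {entropy(hard_label):.3f} bits")  # ~0.000
print(f"soft-label entropy: {entropy(soft_label):.3f} bits")  # ~1.319
```

This gap per annotation is one way to see why the abstract's cost-benefit tradeoff arises: each soft label is more informative but costs more to collect, while many cheap hard labels can deliver the same total information in the big-data regime.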