Aggregating Crowdsourced Labels in Subjective Domains


May 29, 2018 · OpenReview Anonymous Preprint Blind Submission
  • Abstract: Supervised learning problems---particularly those involving social data---are often subjective. That is, human readers looking at the same data might come to legitimate but completely different conclusions based on their personal experiences. Yet in machine learning settings, feedback from multiple human annotators is often reduced to a single "ground truth" label, hiding the rich and diverse interpretations of the data found across the social spectrum. We explore the rewards and challenges of discovering and learning representative distributions of the labeling opinions of a large human population. A major cost of this approach is the number of humans needed to provide enough labels, not only to obtain representative samples but also to train a machine to predict representative distributions on unlabeled data. We propose aggregating label distributions over not just individuals but also data items, in order to maximize the value of humans in the loop. We test different aggregation approaches on state-of-the-art deep learning models. Our results suggest that careful label aggregation methods can greatly reduce the number of samples needed to obtain representative distributions.
  • Keywords: Subjective domains, machine learning, humans in the loop, crowdsourcing
  • TL;DR: We study the problem of learning to predict the underlying diversity of beliefs present in supervised learning domains.
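The core idea---building empirical label distributions per item and pooling annotations across items to stretch a limited annotation budget---can be sketched as follows. This is a minimal illustration, not the paper's actual method; the function names, class labels, and the choice of simple pooling by merging labels are all assumptions for the example.

```python
from collections import Counter

def label_distribution(labels, classes):
    """Empirical distribution of annotator labels for a single item."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(c, 0) / total for c in classes]

def pooled_distribution(items_labels, classes):
    """Pool annotations across a group of items treated as similar,
    increasing the effective number of labels behind each estimate."""
    merged = [label for labels in items_labels for label in labels]
    return label_distribution(merged, classes)

# Hypothetical example: three annotators each label two sentiment items.
classes = ["positive", "neutral", "negative"]
item_a = ["positive", "positive", "neutral"]
item_b = ["positive", "neutral", "neutral"]

print(label_distribution(item_a, classes))                # per-item estimate
print(pooled_distribution([item_a, item_b], classes))     # cross-item estimate
```

With only three labels per item, each per-item distribution is a coarse estimate; pooling the two items yields a distribution backed by six labels, at the cost of assuming the items share the same underlying opinion distribution.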