Crowdsourcing with Difficulty: A Bayesian Rating Model for Heterogeneous Items

TMLR Paper4987 Authors

28 May 2025 (modified: 16 Jun 2025) · Under review for TMLR · CC BY 4.0
Abstract: In applied statistics and machine learning, the "gold standards" used for training are often biased and almost always noisy. Dawid and Skene's justifiably popular crowdsourcing model adjusts for rater (coder, annotator) sensitivity and specificity, but fails to capture distributional properties of rating data gathered for training, which in turn biases training. In this study, we introduce a general-purpose measurement-error model with which we can infer consensus categories by adding item-level effects for difficulty, discriminativeness, and guessability. We further show how to constrain the bimodal posterior of these models to avoid (or, if necessary, allow) adversarial raters. We validate our model's goodness of fit with posterior predictive checks, the Bayesian analogue of $\chi^2$ tests, and assess its predictive accuracy using leave-one-out cross-validation. We illustrate our new model with two well-studied binary rating data sets: caries in dental X-rays and textual implication in natural language.
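One plausible reading of the abstract's item-level effects is a three-parameter logistic (3PL, IRT-style) response curve for the probability that rater $j$ reproduces the latent consensus category $z_i$ of item $i$. The symbols below ($\alpha_j$ for rater ability, $\beta_i$ for item difficulty, $\delta_i$ for discriminativeness, $\lambda_i$ for guessability) are illustrative and may differ from the paper's exact parameterization:

$$\Pr[\,y_{ij} = z_i \mid z_i\,] \;=\; \lambda_i \,+\, (1 - \lambda_i)\,\operatorname{logit}^{-1}\!\bigl(\delta_i\,(\alpha_j - \beta_i)\bigr)$$

Under this reading, the bimodal posterior mentioned in the abstract arises because jointly flipping the signs of the abilities $\alpha_j$ and discriminations $\delta_i$ leaves the likelihood unchanged; constraining $\delta_i > 0$ (or fixing the sign of one rater's ability) selects the non-adversarial mode, while relaxing the constraint allows adversarial raters.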
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Novi_Quadrianto1
Submission Number: 4987