Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks

Published: 29 Jul 2021, Last Modified: 20 Oct 2024
NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
Keywords: label errors, datasets, benchmarks, data-centric, confident learning, noisy labels, dataset curation
TL;DR: We discover that label errors are pervasive across 10 popular benchmark test sets used in most ML research; we release corrected test sets and study when these label errors destabilize benchmarks.
Abstract: We identify label errors in the test sets of 10 of the most commonly used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of at least 3.3% errors across the 10 datasets, where, for example, label errors comprise at least 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then human-validated via crowdsourcing (51% of the algorithmically-flagged candidates are indeed erroneously labeled, on average across the datasets). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy; our findings advise caution here, suggesting that judging models on correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower-capacity models may be practically more useful than higher-capacity models on real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels, ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%; on CIFAR-10 with corrected labels, VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%. Test set errors across the 10 datasets can be viewed at https://labelerrors.com and all label errors can be reproduced via https://github.com/cleanlab/label-errors.
Supplementary Material: zip
URL: https://github.com/cleanlab/label-errors
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/pervasive-label-errors-in-test-sets/code)
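
For readers who want to see how algorithmically-flagged label errors like those in the abstract can be surfaced in practice, below is a minimal sketch using the open-source cleanlab library, which implements confident learning. This is an illustration rather than the paper's exact pipeline: it assumes cleanlab >= 2.0, and the `labels` and `pred_probs` arrays are synthetic placeholders standing in for a real test set's given labels and a trained model's out-of-sample predicted probabilities.

```python
# Sketch: flag candidate label errors in a test set with confident learning.
# Assumes cleanlab >= 2.0; `labels` and `pred_probs` here are synthetic placeholders.
import numpy as np
from cleanlab.filter import find_label_issues

rng = np.random.default_rng(0)
num_examples, num_classes = 1000, 10

# Placeholder data standing in for a real benchmark test set:
# given (possibly noisy) labels and a classifier's predicted class probabilities.
labels = rng.integers(0, num_classes, size=num_examples)
pred_probs = rng.dirichlet(np.ones(num_classes), size=num_examples)

# Indices of examples most likely to be mislabeled, ranked by how little
# confidence the model assigns to the given label.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)

print(f"Flagged {len(issue_indices)} candidate label errors out of {num_examples}.")
# In the paper, candidates flagged this way were then human-validated via crowdsourcing.
```

On real data, the flagged indices would point at test examples whose given labels disagree most strongly with the model's predictions; the paper's released corrected test sets were built by having crowdworkers review such candidates.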