Identifying Incorrect Annotations in Multi-label Classification Data

Published: 04 Mar 2023, Last Modified: 14 Oct 2024 · ICLR 2023 Workshop on Trustworthy ML (Poster)
Keywords: label errors, multi-label classification, data-centric AI, image tagging
TL;DR: This paper introduces a new approach for detecting label errors (i.e. incorrect annotations) in multi-label datasets.
Abstract: In multi-label classification, each example in a dataset may be annotated as belonging to one or more classes (or none of the classes). Example applications include image or document tagging, where each possible tag either applies to a particular image (or document) or does not. With many possible classes to consider, data annotators are likely to make errors when labeling such data in practice. Here we consider algorithms for finding mislabeled examples in multi-label classification datasets. We propose an extension of the Confident Learning framework to this setting, as well as a label quality score that ranks examples with label errors much higher than those that are correctly labeled. Both approaches can utilize any trained classifier. We demonstrate empirically that our methodology outperforms many other methods for label error detection. Applying our approach to CelebA, we estimate that over 30,000 images in this dataset are incorrectly tagged.
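
The per-example label quality score described in the abstract can be illustrated with a short sketch: for each class, compute the classifier's confidence in the annotated choice (the predicted probability if the tag was applied, one minus it if not), then aggregate these per-class values into a single score per example and rank examples by it. This is a minimal illustrative sketch, not the paper's exact scoring function; the function name, the mean aggregation over classes, and the toy data below are assumptions for demonstration only.

```python
import numpy as np

def multilabel_label_quality_scores(labels, pred_probs):
    """Score overall label quality of each example in a multi-label dataset.

    labels: binary array of shape (n_examples, n_classes), where labels[i, k] = 1
        if class k was annotated for example i.
    pred_probs: array of shape (n_examples, n_classes) giving a trained classifier's
        predicted probability that each class applies to each example
        (ideally out-of-sample predictions, e.g. from cross-validation).

    Returns one score per example in [0, 1]; lower scores indicate examples whose
    annotations are more likely to contain errors.
    """
    labels = np.asarray(labels, dtype=float)
    pred_probs = np.asarray(pred_probs, dtype=float)
    # Per-class self-confidence: probability the model assigns to the annotated
    # choice for each class (p if tagged, 1 - p if not tagged).
    per_class_quality = labels * pred_probs + (1.0 - labels) * (1.0 - pred_probs)
    # Aggregate per-class values into a single per-example score (mean is an
    # illustrative choice of aggregation, not necessarily the paper's).
    return per_class_quality.mean(axis=1)

if __name__ == "__main__":
    # Toy example: 5 examples, 3 possible tags.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=(5, 3))
    pred_probs = rng.random(size=(5, 3))
    scores = multilabel_label_quality_scores(labels, pred_probs)
    ranking = np.argsort(scores)  # most suspicious examples first
    print(scores, ranking)
```

In practice, one would review the lowest-scoring examples first when auditing a dataset for annotation errors.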
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/identifying-incorrect-annotations-in-multi/code) (via CatalyzeX)