Abstract: Complementary-label learning (CLL) is a weakly-supervised learning paradigm that aims to train a multi-class classifier using only complementary labels, which indicate classes to which an instance does not belong. Despite numerous algorithmic proposals for CLL, their practical applicability remains unverified for two reasons. Firstly, these algorithms often rely on assumptions about how complementary labels are generated, and it is unclear how far these assumptions are from reality. Secondly, their evaluation has been limited to synthetically labeled datasets. To gain insights into the real-world performance of CLL algorithms, we developed a protocol to collect complementary labels from human annotators. Our efforts resulted in four datasets: CLCIFAR10, CLCIFAR20, CLMicroImageNet10, and CLMicroImageNet20, derived from the well-known classification datasets CIFAR10, CIFAR100, and TinyImageNet200. These datasets, collectively named CLImage, are the first real-world CLL datasets and are publicly available at: https://github.com/ntucllab/CLImage_Dataset. Through extensive benchmark experiments, we discovered a notable decrease in performance when transitioning from synthetically labeled datasets to real-world datasets. We investigated the key factors contributing to this decrease with a thorough dataset-level ablation study. Our analyses highlight annotation noise as the most influential factor in the real-world datasets. In addition, we find that the biased nature of human-annotated complementary labels and the difficulty of validating models with only complementary labels are two outstanding barriers to practical CLL. These findings suggest that the community focus more research effort on developing CLL algorithms and validation schemes that are robust to noisy and biased complementary-label distributions.
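To make the contrast concrete, below is a minimal sketch of the uniform generation assumption that synthetically labeled CLL benchmarks typically rely on, where a complementary label is drawn uniformly from the classes an instance does not belong to. The function name and setup are illustrative only and are not taken from the released repository; real-world annotators, as the paper shows, produce noisier and more biased label distributions than this.

```python
import numpy as np

def generate_uniform_complementary_labels(ordinary_labels, num_classes, rng=None):
    """Sample one complementary label per instance uniformly from the classes
    the instance does NOT belong to (the common synthetic assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    ordinary_labels = np.asarray(ordinary_labels)
    # Draw an offset in [1, num_classes - 1] and shift away from the true class;
    # this is equivalent to uniform sampling over the remaining classes.
    offsets = rng.integers(1, num_classes, size=ordinary_labels.shape[0])
    return (ordinary_labels + offsets) % num_classes

# Illustrative usage: 5 instances from a 10-class problem (e.g., CIFAR10).
true_labels = [3, 0, 7, 7, 1]
comp_labels = generate_uniform_complementary_labels(true_labels, num_classes=10)
print(comp_labels)  # each entry differs from the corresponding true label
```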
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=FHkWY4aGsN
Changes Since Last Submission: 1. Cited Gebru et al. (2021) and noted that we used their template for our "Datasheets for Datasets."
2. Merged two related sentences in the "broader impact statement" for improved clarity.
3. Reviewed and corrected the instructions for "Annotation Task Design and Deployment on Amazon MTurk" in the README file of the GitHub repository.
4. Specified the license for our new datasets in the "Datasheets for Datasets."
5. Fixed a typo in the references.
6. Replaced "synthetic dataset" with "synthetically labeled datasets" throughout the abstract and the main text for accuracy.
7. Publicly released the code repository and datasets.
Code: https://github.com/ntucllab/CLImage_Dataset
Assigned Action Editor: ~Takashi_Ishida1
Submission Number: 4381