Understanding Label Bias in Single Positive Multi-Label Learning

01 Mar 2023 (modified: 13 Apr 2025) · Submitted to Tiny Papers @ ICLR 2023
Keywords: computer vision, deep learning, single positive label, multi-label, label bias
TL;DR: Label bias matters for single positive multi-label learning.
Abstract: Annotating data for multi-label classification is prohibitively expensive because every category of interest must be confirmed as either present or absent. Recent work on single positive multi-label (SPML) learning has shown that effective multi-label classifiers can be trained using only one positive label per image. The standard SPML benchmarks are derived from traditional multi-label classification datasets by retaining one positive label for each training example (chosen uniformly at random) and discarding all other labels. In realistic annotation settings, however, positive labels are unlikely to be chosen uniformly at random. In this work, we explore the effect of label bias in SPML.
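The benchmark construction described above (keep one positive per example, uniformly at random, and discard the rest) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the NumPy label-matrix representation, and the use of -1 to mark discarded (unobserved) labels are all assumptions made for this sketch.

```python
import numpy as np

def to_single_positive(labels, rng=None):
    """Simulate the standard SPML benchmark construction: for each example,
    retain one positive label chosen uniformly at random and discard all
    other annotations (marked -1 = unobserved; this encoding is illustrative).

    labels: (n_examples, n_classes) binary multi-label matrix.
    Returns a matrix with at most one 1 per row; all other entries are -1.
    """
    rng = np.random.default_rng(rng)
    n, c = labels.shape
    out = np.full((n, c), -1, dtype=int)  # -1 = label not observed
    for i in range(n):
        positives = np.flatnonzero(labels[i])  # indices of true positives
        if positives.size == 0:
            continue  # no positive label to retain for this example
        out[i, rng.choice(positives)] = 1  # uniform choice among positives
    return out

# Example: two images with full multi-label annotations.
full = np.array([[1, 0, 1, 1],
                 [0, 1, 0, 0]])
single_positive = to_single_positive(full, rng=0)
```

A biased variant of this sampler (e.g. weighting rarer or more salient classes) is precisely the kind of departure from uniform sampling that the paper investigates.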
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/understanding-label-bias-in-single-positive/code)

