Keywords: multi-label learning, complementary label learning, weakly supervised learning
Abstract: Multi-label complementary label learning (MLCLL) is a weakly supervised paradigm that addresses multi-label learning (MLL) tasks using complementary labels (i.e., irrelevant labels) instead of relevant labels. Existing methods typically adopt an unbiased risk estimator (URE) under the assumption that complementary labels follow a uniform distribution. However, this assumption fails in real-world scenarios due to instance-specific annotation biases, making URE-based methods ineffective under such conditions. Furthermore, existing methods underutilize label correlations inherent in MLL. To address these limitations, we propose ComRank, a ranking loss framework for MLCLL, which encourages complementary labels to be ranked lower than non-complementary ones, thereby modeling pairwise label relationships. Theoretically, our surrogate loss ensures Bayes consistency under both uniform and biased cases. Experiments demonstrate the effectiveness of our method in MLCLL tasks. The code is available at https://github.com/JellyJamZhu/ComRank.
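The core idea described in the abstract, ranking complementary (irrelevant) labels below non-complementary ones, can be illustrated with a minimal pairwise hinge-style ranking loss. This is a hypothetical sketch for intuition only (the function name, margin formulation, and averaging are assumptions), not the paper's actual surrogate loss:

```python
import numpy as np

def pairwise_ranking_loss(scores, comp_mask, margin=1.0):
    """Illustrative pairwise ranking loss for MLCLL (not the paper's exact loss).

    Penalizes every (complementary, non-complementary) label pair in which
    the complementary label's score is not at least `margin` below the
    non-complementary label's score.

    scores:    (L,) array of predicted per-label scores
    comp_mask: (L,) boolean array, True where the label is complementary
    """
    comp = scores[comp_mask]        # scores of complementary labels
    noncomp = scores[~comp_mask]    # scores of non-complementary labels
    if comp.size == 0 or noncomp.size == 0:
        return 0.0
    # hinge over all (complementary, non-complementary) pairs
    diffs = margin + comp[:, None] - noncomp[None, :]
    return float(np.maximum(diffs, 0.0).mean())
```

When every non-complementary label already outscores every complementary label by the margin, the loss is zero; otherwise violating pairs contribute linearly, which is what drives complementary labels toward the bottom of the ranking.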
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 17201