Improving Single Positive Multi-label Classification via Knowledge-based Label-weighted Large Loss Rejection

Published: 01 Jan 2023 · Last Modified: 15 Nov 2024 · SoICT 2023 · CC BY-SA 4.0
Abstract: It is undeniable that high-quality data plays a crucial role in achieving good outcomes. However, obtaining such a dataset is always challenging, particularly in multi-label classification, where traditional approaches require a fully labeled dataset. This challenge has led to the emergence of several effective learning techniques, collectively called single positive multi-label learning (SPML), which use multi-label training images annotated with only a single positive label. In this work, we propose an effective method that improves the cutting-edge BoostLU baseline by leveraging a label-weighted loss and a prior-knowledge-based regularization strategy. First, we present a novel approach for reweighting the contribution of each label to the total loss: the single known positive label receives a higher weight, while unreliable pseudo-negative labels, identified through their large per-label losses, are eliminated by assigning them zero weight. In addition, we introduce an auxiliary loss that regularizes the expected number of positive labels per image, encouraging the model to predict a reasonable number of positives based on prior knowledge about the dataset. Experimental results on several benchmark datasets show that our method outperforms both the baseline and other state-of-the-art methods.
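To make the two ideas concrete, the following is a minimal PyTorch-style sketch of what a label-weighted objective with large-loss rejection and an expected-positive regularizer could look like. It is an illustration under stated assumptions, not the paper's actual implementation: the function and parameter names (`spml_loss`, `reject_fraction`, `pos_weight`, `expected_k`) and the specific rejection rule (dropping a fixed fraction of the largest pseudo-negative losses) are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def spml_loss(logits, positive_idx, expected_k, reject_fraction=0.02, pos_weight=2.0):
    """Sketch of a label-weighted SPML objective (names/hyperparameters illustrative).

    logits:       (B, C) raw classifier scores
    positive_idx: (B,) index of the single observed positive label per image
    expected_k:   prior estimate of the average number of positives per image
    """
    B, C = logits.shape
    probs = torch.sigmoid(logits)

    # "Assume negative" targets: the one observed positive label is 1,
    # all other labels are treated as (pseudo-)negatives.
    targets = torch.zeros_like(logits)
    targets[torch.arange(B), positive_idx] = 1.0

    # Per-label BCE losses, kept unreduced so each label can be reweighted.
    per_label_loss = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    )

    # Label weights: upweight the observed true positive...
    weights = torch.ones_like(per_label_loss)
    weights[torch.arange(B), positive_idx] = pos_weight

    # ...and zero out the pseudo-negatives with the largest losses, since a
    # confidently contradicted "negative" is likely an unannotated positive.
    neg_losses = per_label_loss.masked_fill(targets == 1.0, float("-inf"))
    k = max(1, int(reject_fraction * C))
    _, reject_idx = neg_losses.topk(k, dim=1)
    weights.scatter_(1, reject_idx, 0.0)

    classification_loss = (weights * per_label_loss).sum() / weights.sum()

    # Auxiliary regularizer: push the expected number of predicted positives
    # per image toward the dataset-level prior expected_k.
    expected_positives = probs.sum(dim=1).mean()
    reg_loss = (expected_positives - expected_k) ** 2

    return classification_loss, reg_loss
```

In this sketch, rejection is implemented by zeroing the weights of the top-k largest pseudo-negative losses each batch, which realizes "eliminating unreliable pseudo-negative labels" as a hard selection; the auxiliary term is a simple squared penalty on the mean predicted label count, standing in for whatever prior-based regularizer the paper uses.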