The Potential of Acknowledging the Unknown: Single Positive Multi-label Learning in Medical Image Processing

Published: 27 Apr 2024, Last Modified: 06 Jun 2024 · MIDL 2024 Short Papers · CC BY 4.0
Keywords: Single positive multi-label training, Medical imaging, Noise-robust training, Entropy-maximization loss, Asymmetric pseudo-labeling, Generalized assume negative loss
Abstract: Owing to the high complexity of medical images and tight resource constraints, manual annotation involves a severe trade-off between the number of labels annotated per sample and the overall dataset size, and it also produces many false-negative labels because recognizing every potential pathology or condition in an image is difficult. Single Positive Multi-Label (SPML) learning addresses this form of label noise by training with only a single positive label per sample and treating the remaining labels as uncertain. Since SPML remains largely unexplored in medical imaging, this work investigates state-of-the-art SPML loss functions, including the Generalized Assume Negative (G-AN) and Entropy Maximization (EM) losses, across different training sample counts. Additionally, we examine how asymmetric pseudo-labeling combined with the EM loss mitigates the label uncertainty induced by SPML learning.
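To make the entropy-maximization idea mentioned in the abstract concrete, below is a minimal PyTorch-style sketch of an EM-type SPML loss: the single observed positive label contributes a standard log-likelihood term, while each unannotated label contributes a negated Bernoulli-entropy term, so minimizing the loss keeps predictions on unknown labels non-committal instead of assuming them negative. The function name `em_loss`, the weight `alpha`, and the target encoding are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def em_loss(logits, targets, alpha=0.1, eps=1e-7):
    """Illustrative entropy-maximization (EM) loss for SPML training.

    logits:  (batch, num_labels) raw model outputs.
    targets: (batch, num_labels), 1 for the single observed positive label
             per sample, 0 meaning "unannotated/unknown" (not "negative").
    alpha:   weight of the entropy term on unknown labels (assumed value).
    """
    probs = torch.sigmoid(logits).clamp(eps, 1 - eps)

    # Log-likelihood term for the single observed positive per sample.
    pos_term = -targets * torch.log(probs)

    # Bernoulli entropy of each unannotated prediction; subtracting it from
    # the loss rewards staying uncertain rather than assuming "negative".
    entropy = -(probs * torch.log(probs) + (1 - probs) * torch.log(1 - probs))
    unk_term = -alpha * (1 - targets) * entropy

    return (pos_term + unk_term).mean()
```

In this sketch, asymmetric pseudo-labeling would be layered on top by converting unannotated labels with sufficiently low predicted probability into pseudo-negatives before computing the loss; the thresholding scheme is the paper's subject of investigation and is not reproduced here.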
Submission Number: 114