Abstract: While semi-supervised learning (SSL) has demonstrated remarkable success in natural image segmentation, medical image segmentation with limited annotated data remains a highly relevant and challenging research problem. Many existing approaches rely on a shared network for learning from both labeled and unlabeled data, and therefore struggle to fully exploit the labeled data because of interference from unreliable pseudo-labels; training on such pseudo-labels also degrades model quality. To address these challenges, we propose a novel training strategy that uses two distinct decoders: one for labeled data and another for unlabeled data. This decoupling enhances the model's ability to fully leverage the knowledge embedded within the labeled data. Moreover, we introduce an additional decoder, referred to as the "worst-case-aware decoder," which indirectly assesses the potential worst-case scenario that might emerge from pseudo-label training. We train the encoder adversarially to learn features that avoid this worst-case scenario. Experimental results on three medical image segmentation datasets show that our method improves on state-of-the-art techniques by 5.6%–28.10% in terms of Dice score. The source code is available at https://github.com/thesupermanreturns/decoupled.
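The decoupled routing the abstract describes can be illustrated with a minimal sketch: a shared encoder feeds three separate decoders, with labeled and unlabeled batches never passing through the same decoder, and a third "worst-case-aware" decoder whose output the encoder is trained against. All layer names, shapes, and the toy random-projection "layers" below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(dim_in, dim_out):
    # Toy "layer": a fixed random projection standing in for a real sub-network.
    w = rng.standard_normal((dim_in, dim_out)) * 0.1
    return lambda x: x @ w

# Shared encoder and three decoders, mirroring the abstract's design:
# one decoder for labeled data, one for unlabeled data, and a
# worst-case-aware decoder trained adversarially against the encoder.
encoder = linear(16, 8)
dec_labeled = linear(8, 4)     # receives only labeled batches (supervised loss)
dec_unlabeled = linear(8, 4)   # receives only unlabeled batches (pseudo-label loss)
dec_worst_case = linear(8, 4)  # estimates the worst-case outcome of pseudo-labeling

def forward(x, labeled):
    z = encoder(x)
    # Decoupled routing: labeled and unlabeled data use distinct decoders,
    # so unreliable pseudo-labels cannot interfere with supervised learning.
    main = dec_labeled(z) if labeled else dec_unlabeled(z)
    worst = dec_worst_case(z)  # encoder would be updated to move away from this
    return main, worst

x = rng.standard_normal((2, 16))
out_l, worst_l = forward(x, labeled=True)
out_u, worst_u = forward(x, labeled=False)
```

In a real system each `linear` would be a full segmentation decoder and the adversarial step would update the encoder to minimize the worst-case decoder's influence; the sketch only shows the data routing.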
External IDs: dblp:conf/miccai/DasGCAYSL24