Abstract: Semi-supervised semantic segmentation (SSS), which allows for learning a better model from a small fraction of labeled samples together with a large amount of unlabeled ones, is valuable yet challenging in medical image analysis. Recent works (e.g., UniMatch) have found that weak-to-strong consistency via augmentation is especially conducive to SSS training. However, they inadvertently introduce cognitive biases on unlabeled images, making it difficult to segment accurately near the edge regions of the target. In this paper, we present a lightweight bias-correct module that self-corrects these mistakes across the strong perturbations. Building on this module, we design a new framework, named BcMatch, by plugging it into UniMatch to reduce the cognitive biases stemming from incorrect pseudo-labels on unlabeled images. Moreover, we introduce a bias correction loss, which works in tandem with the consistency loss to guide model learning and focuses more on the edge regions of the targets. Experiments on the representative semi-supervised segmentation dataset, ACDC, demonstrate that BcMatch surpasses UniMatch by a large margin, attaining new state-of-the-art performance. The code is available at https://github.com/zhangyan498/BcMatch.
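To make the abstract's training objective concrete, the sketch below shows one plausible way a weak-to-strong consistency loss could be combined with an edge-focused bias-correction term on unlabeled images. The abstract does not specify the module architecture, loss form, or weighting; the helper names (`edge_weight_map`, `unlabeled_losses`), the morphological-gradient edge heuristic, and the hyperparameters `conf_thresh` and `lambda_bc` are all illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: weak-to-strong consistency plus a hypothetical
# edge-weighted bias-correction term (assumed, not the paper's method).
import torch
import torch.nn.functional as F


def edge_weight_map(pseudo_label, num_classes, kernel_size=3):
    """Weight map that is >1 near class boundaries of the pseudo-label
    (simple morphological-gradient heuristic; an assumption for illustration)."""
    onehot = F.one_hot(pseudo_label, num_classes).permute(0, 3, 1, 2).float()
    pad = kernel_size // 2
    dilated = F.max_pool2d(onehot, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-onehot, kernel_size, stride=1, padding=pad)
    edges = (dilated - eroded).sum(dim=1).clamp(max=1.0)  # 1 on boundary pixels
    return 1.0 + edges  # emphasize edge regions


def unlabeled_losses(logits_weak, logits_strong1, logits_strong2,
                     conf_thresh=0.95, lambda_bc=0.5):
    """Consistency loss on confident pseudo-labels from the weak view, plus an
    edge-weighted correction term that aligns the two strong views."""
    with torch.no_grad():
        probs_w = logits_weak.softmax(dim=1)
        conf, pseudo = probs_w.max(dim=1)
        mask = (conf >= conf_thresh).float()

    # Weak-to-strong consistency (UniMatch-style), one term per strong view.
    ce1 = F.cross_entropy(logits_strong1, pseudo, reduction="none")
    ce2 = F.cross_entropy(logits_strong2, pseudo, reduction="none")
    loss_cons = ((ce1 + ce2) * mask).sum() / mask.sum().clamp(min=1.0)

    # Hypothetical bias-correction term: penalize disagreement between the two
    # strong views, weighting pixels near pseudo-label boundaries more heavily.
    w = edge_weight_map(pseudo, logits_weak.shape[1])
    kl = F.kl_div(logits_strong1.log_softmax(dim=1),
                  logits_strong2.softmax(dim=1), reduction="none").sum(dim=1)
    loss_bc = (kl * w * mask).sum() / mask.sum().clamp(min=1.0)

    return loss_cons + lambda_bc * loss_bc


if __name__ == "__main__":
    B, C, H, W = 2, 4, 64, 64  # ACDC has 4 classes including background
    lw, ls1, ls2 = (torch.randn(B, C, H, W) for _ in range(3))
    print(unlabeled_losses(lw, ls1, ls2))
```

In this sketch the bias-correction term only reshapes where the model pays attention (boundary pixels) rather than changing the pseudo-labels themselves; the actual BcMatch module and loss are described in the paper.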