MINDFeed: Mutual Information-Guided Single-Network Consistency Learning for Semi-Supervised 3D Medical Image Segmentation
Abstract: Medical image segmentation models based on deep learning require dense voxel-level annotations, which are costly to obtain for 3D medical imaging tasks. To address this limitation, we propose MINDFeed (Mutual Information per Decoder as Feedback), a semi-supervised training pipeline for 3D medical image segmentation. MINDFeed estimates predictive uncertainty via mutual information across stochastic forward passes and uses this signal as a feedback gate that adaptively modulates decoder representations, encouraging consistency in reliable regions while suppressing ambiguous responses. Unlike many prior approaches, MINDFeed does not rely on student–teacher architectures, exponential moving averages, or multiple model instances, thereby maintaining architectural simplicity and training efficiency. We conduct extensive experiments on CT and MRI datasets, covering binary and multi-class segmentation tasks with both single- and multi-modal inputs, and demonstrate that MINDFeed consistently outperforms recent state-of-the-art semi-supervised methods. Beyond improved segmentation accuracy, MINDFeed exhibits lower performance variability across test cases, highlighting its robustness under limited-annotation settings.
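The abstract gives no implementation details, but the stated mechanism (mutual information over stochastic forward passes, used to gate decoder features) can be sketched concretely. Below is a minimal PyTorch sketch assuming MC-dropout-style stochastic passes; the helper names (mutual_information_uncertainty, feedback_gate), tensor shapes, and min-max gating scheme are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def mutual_information_uncertainty(logits_stack, eps=1e-8):
    # logits_stack: (T, B, C, D, H, W) logits from T stochastic passes
    # (e.g., with dropout kept active at inference). Assumed layout.
    probs = F.softmax(logits_stack, dim=2)          # per-pass class probabilities
    mean_probs = probs.mean(dim=0)                  # (B, C, D, H, W) ensemble mean
    # Entropy of the mean prediction (total uncertainty).
    h_mean = -(mean_probs * (mean_probs + eps).log()).sum(dim=1, keepdim=True)
    # Mean per-pass entropy (the "expected" or aleatoric part).
    h_each = -(probs * (probs + eps).log()).sum(dim=2, keepdim=True).mean(dim=0)
    # Mutual information (BALD-style): high where passes disagree.
    return h_mean - h_each                          # (B, 1, D, H, W)

def feedback_gate(decoder_feats, mi_map, eps=1e-8):
    # Suppress decoder features in high-MI (ambiguous) regions and keep
    # reliable regions near their original magnitude.
    mi_norm = mi_map / (mi_map.amax(dim=(2, 3, 4), keepdim=True) + eps)
    gate = 1.0 - mi_norm                            # ~1 where passes agree
    return decoder_feats * gate                     # broadcast over channels

In this sketch the gate is a per-voxel scalar broadcast across decoder channels; how MINDFeed actually normalizes the MI map and where in the decoder the gate is applied are details the abstract does not specify.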
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Gustavo_Carneiro1
Submission Number: 8347