Learning to Teach: Improving Mean Teacher in Semi-supervised Medical Image Segmentation with Dynamic Decay Modulation
Keywords: Meta learning, Medical image segmentation, Semi-supervised learning
TL;DR: Applying meta-learning to the Mean Teacher framework as a plug-and-play module.
Abstract: Medical image segmentation is essential in medical diagnostics but is hindered by the scarcity of labeled three-dimensional imaging data, which requires costly expert annotations. Semi-supervised learning (SSL) addresses this limitation by utilizing large amounts of unlabeled data alongside limited labeled samples. The Mean Teacher model, a prominent SSL method, enhances performance by employing an Exponential Moving Average (EMA) of the student model to form a teacher model, where the EMA decay coefficient is critical. However, using a fixed coefficient fails to adapt to the evolving training dynamics, potentially restricting the model's effectiveness. In this paper,
we propose Meta MeanTeacher, a novel framework that integrates meta-learning to dynamically adjust the EMA decay coefficient during training. At its core is the proposed Dynamic Decay Modulation (DDM) module, which captures the representational capacities of both the student and teacher models. DDM learns the optimal EMA decay coefficient by taking the losses of the student and teacher networks as inputs and updating the coefficient through pseudo-gradient descent on a meta-objective. This dynamic adjustment allows the teacher model to guide the student more effectively as training progresses.
Experiments on two datasets with different modalities, i.e., CT and MRI, show that Meta MeanTeacher consistently outperforms traditional Mean Teacher methods with fixed EMA coefficients. Furthermore, integrating Meta MeanTeacher into state-of-the-art frameworks like UA-MT, AD-MT, and PMT leads to significant performance enhancements, achieving new state-of-the-art results in semi-supervised medical image segmentation.
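The abstract describes two coupled updates: the standard Mean Teacher EMA update of the teacher weights, and a DDM step that adjusts the decay coefficient from the student and teacher losses via a pseudo-gradient on a meta-objective. The paper does not release code here, so the following is only a minimal illustrative sketch; the function names (`ema_update`, `ddm_update`), the loss-gap pseudo-gradient, the meta learning rate, and the clamping range are all assumptions, not the authors' implementation.

```python
def ema_update(teacher_w, student_w, alpha):
    """Standard Mean Teacher EMA: theta_t <- alpha*theta_t + (1-alpha)*theta_s."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def ddm_update(alpha, student_loss, teacher_loss, meta_lr=0.01):
    """Hypothetical DDM step (illustrative only): use the student/teacher
    loss gap as a pseudo-gradient of a meta-objective w.r.t. alpha.
    If the teacher currently outperforms the student (lower loss), raise
    alpha to trust the teacher more; otherwise lower it. The clamp keeps
    alpha in a typical EMA decay range."""
    pseudo_grad = student_loss - teacher_loss
    alpha = alpha + meta_lr * pseudo_grad
    return min(max(alpha, 0.9), 0.999)
```

In a training loop, one would call `ddm_update` each iteration before `ema_update`, so the decay coefficient tracks the evolving student/teacher dynamics instead of staying fixed.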
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4161