Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels

22 Sept 2022 (modified: 14 Oct 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Contrastive Learning, Self-supervised Learning, Semi-supervised Learning, Medical Image Segmentation
TL;DR: This paper proposes a semi-supervised contrastive learning framework that unifies three principles (tailness, consistency, and diversity) and outperforms existing semi-supervised and fully supervised competitors.
Abstract: Recent studies on contrastive learning have achieved remarkable performance by leveraging only a few labels in the context of medical image segmentation. Existing methods mainly focus on instance discrimination and invariant mapping (i.e., pulling positive samples closer and pushing negative samples apart in the feature space). However, they face three common pitfalls: (1) tailness: medical image data usually follows an implicit long-tail class distribution, so blindly leveraging all pixels in training can lead to data imbalance and degraded performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, due to the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention. This motivates us to seek a principled approach for strategically making use of the dataset itself to discover similar yet distinct samples from different anatomical views. In this paper, we introduce a novel semi-supervised medical image segmentation framework termed Mine yOur owN Anatomy (MONA), and make three contributions. First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to yield meaningful anatomical features, mainly due to the lack of a supervision signal. We present two simple solutions for learning invariances: stronger data augmentations and nearest neighbors. Second, we construct a set of objectives that encourage the model to decompose medical images into a collection of anatomical features in an unsupervised manner. Lastly, our extensive results on three benchmark datasets validate the effectiveness of the proposed MONA, which achieves new state-of-the-art performance across different labeled settings. Perhaps most impressively, MONA trained with 10% labeled data outperforms, for the first time, the fully supervised counterpart on all three datasets. MONA makes minimal assumptions on domain expertise, and hence constitutes a practical and versatile solution for medical image analysis. Code will be made available to the public.
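The abstract's consistency principle (learning invariances through stronger data augmentations and nearest neighbors) echoes the NNCLR-style recipe of mining positives from a support queue rather than relying only on augmented views. Below is a minimal, hypothetical sketch of such a nearest-neighbor contrastive objective; the function name, tensor shapes, queue mechanics, and temperature value are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): InfoNCE with
# nearest-neighbor positives mined from a memory queue.
import torch
import torch.nn.functional as F


def nn_contrastive_loss(z1: torch.Tensor,
                        z2: torch.Tensor,
                        queue: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """Nearest-neighbor contrastive loss (NNCLR-style positive mining).

    z1, z2: (N, D) embeddings of two augmented views of the same slices.
    queue:  (K, D) embeddings of previously seen samples, i.e. the
            "dataset itself" mined for similar yet distinct samples.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)

    # Replace each view-1 embedding with its nearest neighbor in the
    # queue; that neighbor serves as the positive for view 2.
    nn_idx = (z1 @ queue.t()).argmax(dim=1)          # (N,)
    nn_z1 = queue[nn_idx]                            # (N, D)

    # Standard InfoNCE: diagonal entries are positive pairs, all other
    # in-batch pairs act as negatives.
    logits = nn_z1 @ z2.t() / temperature            # (N, N)
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

In practice, z1 and z2 would be pixel- or region-level embeddings produced by a segmentation encoder, and the queue a momentum-updated memory bank; mining the positive from the queue instead of the other view is what pulls in "similar yet distinct" anatomical samples beyond simple augmentation invariance.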
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Machine Learning for Sciences (e.g., biology, physics, health sciences, social sciences, climate/sustainability)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/mine-your-own-anatomy-revisiting-medical/code)