Do different self-supervised learning tasks for medical imaging result in better downstream performance?

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: medical imaging, tumor segmentation, swin transformer, representation analysis
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We analyzed the performance of two state-of-the-art vision transformer architectures, Swin UNETR and SMIT, on tumor segmentation. We found that SMIT, pretrained on more contrastive-oriented tasks, learns better representations for downstream tasks.
Abstract: Vision transformers pretrained on relatively large and diverse sets of medical images have demonstrated the capability to generate highly accurate segmentations of normal tissues across various anatomic sites and medical imaging modalities. However, there are limited studies of their capability to reliably segment tumors. Hence, we comprehensively evaluate two state-of-the-art transformers, SwinUNETR and SMIT, for segmenting primary and metastatic lung and ovarian cancers arising from two completely different organs. We systematically evaluate the ability of these two pretrained models, after finetuning, to segment the individual tumor types on in-distribution data (3D computed tomography (CT) scans with similar imaging acquisitions) as well as on out-of-distribution (OOD) data consisting of CTs covering different anatomical regions and acquired with different protocols. Our work, focused primarily on transformers, highlights the need to analyze previously overlooked metrics for 3D medical imaging to mitigate misdiagnoses and ensure patient safety.
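To make the evaluation setup concrete, below is a minimal finetuning sketch, not the authors' actual code: it loads MONAI's public SwinUNETR implementation and finetunes it for binary tumor segmentation on 3D CT patches with a combined Dice and cross-entropy loss. The pretrained-weight path, patch size, learning rate, and the finetune_step helper are all hypothetical placeholders, not values from the paper.

import torch
from monai.losses import DiceCELoss
from monai.networks.nets import SwinUNETR

# SwinUNETR backbone for 3D CT patches (MONAI implementation).
model = SwinUNETR(
    img_size=(96, 96, 96),   # placeholder 3D patch size
    in_channels=1,           # single-channel CT intensity
    out_channels=2,          # background vs. tumor
    feature_size=48,
)

# Loading self-supervised pretrained weights before finetuning
# (the checkpoint path is a placeholder):
# state = torch.load("swin_unetr_pretrained.pt")
# model.load_state_dict(state, strict=False)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def finetune_step(images, labels):
    """One finetuning step on a batch of CT patches and tumor masks.

    images: (B, 1, 96, 96, 96) float tensor of CT patches
    labels: (B, 1, 96, 96, 96) integer tensor of tumor masks
    """
    optimizer.zero_grad()
    logits = model(images)           # (B, 2, 96, 96, 96)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The same finetuned model would then be run on in-distribution CTs (similar acquisitions) and on OOD CTs (different anatomical regions and acquisition protocols) to compare segmentation quality, per the evaluation protocol the abstract describes.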
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7853