Benchmarking Self-Supervised Representation Learning from a million Cardiac Ultrasound images

Published: 2022 · Last Modified: 18 Nov 2024 · EMBC 2022 · CC BY-SA 4.0
Abstract: Supervised deep learning has become the de facto standard for most computer vision and machine learning problems, including medical imaging. However, the requirement of high-quality annotations on large datasets imposes a substantial overhead during model development. Self-supervised learning (SSL) is a paradigm that leverages unlabelled data to derive common-sense knowledge, relying on signals present in the data itself rather than on external supervisory signals. Recent work has produced state-of-the-art SSL methods whose performance comes very close to that of supervised methods with minimal to no supervision in natural image settings. In this paper, we perform a thorough comparison of state-of-the-art SSL methods in a medical imaging setting, specifically the challenging task of cardiac view classification from ultrasound acquisitions. We analyze the effect of data size in both phases of training: pretext task training and main task training. We compare performance against a task-specific SSL technique based on simple image features and against transfer learning with ImageNet pre-training.
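To make the SSL paradigm concrete: contrastive methods such as SimCLR (one of the state-of-the-art families the abstract alludes to) train an encoder so that two augmented views of the same image map to nearby embeddings while views of different images are pushed apart. The sketch below is an illustrative NumPy implementation of the NT-Xent contrastive objective, not the paper's actual training code; the function name, shapes, and temperature value are assumptions for illustration.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive pretext objective used by SimCLR-style SSL.
    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    (Illustrative sketch; not the paper's implementation.)"""
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize -> cosine sims
    sim = z @ z.T / temperature                         # (2N, 2N) scaled similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # the positive partner of sample i is its other augmented view
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()

# Toy demo: well-aligned views should score a lower loss than random ones.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(4, 8)))
loss_random = nt_xent_loss(z1, rng.normal(size=(4, 8)))
print(loss_aligned, loss_random)
```

No labels appear anywhere in the loss: the "supervisory signal" is the agreement between augmentations of the same unlabelled image, which is what lets the pretext phase exploit the full unannotated ultrasound corpus before the (label-hungry) main task training.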