SSL-Based Encoder Pretraining for Segmenting a Heterogeneous Chronic Wound Image Database with Few Annotations
Abstract: Segmentation is crucial in medical imaging, but obtaining a sufficient quantity of annotated data is challenging, limiting the development of high-performing deep learning models. Self-supervised learning (SSL) strategies offer a promising way to address this lack of annotation. One such strategy, DINOv2 (self-DIstillation with NO labels), enabled the curation of the vast LVD-142M database and the training of encoders whose weights are now freely available. However, clinical images may not be well represented in LVD-142M. Thus, in a context of scarce annotated clinical data, we evaluate the benefits of a generic encoder pre-trained with DINO on LVD-142M against those of a lighter one. We also explore the effect of applying the DINO SSL pre-training strategy directly on the target dataset, and we measure the impact of the quantity of available labels on segmentation performance. Results show that, with few annotated images, a specific, lightweight encoder can outperform the generically pre-trained DINO one. Furthermore, DINO SSL pre-training on the specific dataset is beneficial for the small encoder.
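To make one of the compared configurations concrete, below is a minimal sketch, not the paper's exact architecture: a DINOv2 ViT-S/14 encoder pre-trained on LVD-142M (publicly released weights loaded via torch.hub), kept frozen, with a small trainable segmentation head on top. The head design, class count, and input size are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DinoSegmenter(nn.Module):
    """Frozen DINOv2 encoder + trainable 1x1-conv segmentation head (illustrative)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # ViT-S/14 pre-trained with DINOv2 on LVD-142M; weights are freely available.
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.backbone.parameters():
            p.requires_grad = False  # few annotations: train only the head
        self.head = nn.Conv2d(384, n_classes, kernel_size=1)  # 384 = ViT-S embed dim

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Patch-token features reshaped to a (B, C, H/14, W/14) spatial grid.
        feats = self.backbone.get_intermediate_layers(images, n=1, reshape=True)[0]
        logits = self.head(feats)
        # Upsample patch-level logits back to the input resolution.
        return F.interpolate(logits, size=images.shape[-2:],
                             mode="bilinear", align_corners=False)

model = DinoSegmenter(n_classes=2)  # e.g. wound vs. background
x = torch.randn(1, 3, 224, 224)     # input sides must be multiples of 14
print(model(x).shape)               # torch.Size([1, 2, 224, 224])
```

Freezing the backbone and training only a lightweight head is one standard way to exploit a generic SSL encoder when labels are scarce; the lightweight-encoder and target-dataset DINO pre-training variants studied in the paper would swap in a different backbone under the same head.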