Self-Supervised Learning in Remote Sensing: Quantifying Approaches' Effectiveness Across Downstream Tasks
Abstract: In the remote sensing field, vast amounts of data are available, but labeling such data is expensive. Self-supervised learning makes it possible to leverage unlabeled data for training deep neural network models. This work focuses on how effective self-supervised pretext tasks are for different supervised downstream tasks. To this end, we compare generative, contrastive, and generative-contrastive pretext tasks across classification and semantic segmentation downstream tasks. Our results show that the contrastive setup is beneficial for remote sensing image classification, whereas the generative-contrastive setup performs best on the semantic segmentation downstream task. Our work therefore indicates that the choice of self-supervised pretext task is an important consideration for optimizing downstream task performance.
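As context for the contrastive setup the abstract refers to, the following is a minimal sketch of a SimCLR-style NT-Xent contrastive loss, a common instantiation of such a pretext task. This is an illustrative example under assumed conventions (function name, batch shape, temperature value are all hypothetical), not the paper's actual implementation.

```python
# Illustrative sketch of a contrastive (NT-Xent) pretext loss;
# hypothetical example, not the code used in the paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for sample i is its other view at index (i + N) mod 2N.
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Usage: embeddings produced by an encoder over two augmentations of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

The loss pulls the two views of each image together in embedding space while pushing all other samples in the batch apart, which is the property that makes contrastive pretraining useful for downstream classification.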