Semantic-Aware Self-Supervised Learning for Remote Sensing Image Segmentation

Published: 01 Jan 2023, Last Modified: 05 Mar 2025 · IC-NIDC 2023 · CC BY-SA 4.0
Abstract: Significant progress has been made in supervised semantic segmentation of remote sensing images (RSI), mainly attributable to model design and large amounts of high-quality pixel-wise annotated data. However, pixel-level annotation is extremely costly and requires domain expertise. In addition, different application scenarios usually require different target classes, so both the annotated data and the trained models lack generalization. In this paper, we devise a semantic-aware self-supervised learning method, termed SASS, which learns latent patterns from massive unlabeled remote sensing images and produces dense feature representations with semantic information. Moreover, an efficient contrastive loss function is designed to optimize the proposed SASS. After training, the features extracted by SASS can be used to perform unsupervised semantic segmentation (USSS) of remote sensing images. Experiments show that our method surpasses recent works by a large margin, achieving 55.7% mIoU and 71.89% overall accuracy on the Potsdam-3 dataset, improvements of 5.12% and 6%, respectively.
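The abstract does not spell out the exact form of the contrastive loss used to train SASS. As a point of reference only, the sketch below shows a generic dense InfoNCE-style contrastive objective over pixel embeddings from two augmented views; the function name, tensor shapes, and temperature value are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the paper's SASS loss): a dense InfoNCE-style
# contrastive objective over per-pixel embeddings of two augmented views.
import torch
import torch.nn.functional as F


def dense_info_nce(feat_a: torch.Tensor, feat_b: torch.Tensor,
                   temperature: float = 0.1) -> torch.Tensor:
    """feat_a, feat_b: (N, C, H, W) dense features of two views of the same images.
    Corresponding pixel locations are treated as positives; all other pixels
    in the batch serve as negatives."""
    n, c, h, w = feat_a.shape
    # Flatten the spatial grid and L2-normalize each pixel embedding.
    a = F.normalize(feat_a.permute(0, 2, 3, 1).reshape(-1, c), dim=1)  # (N*H*W, C)
    b = F.normalize(feat_b.permute(0, 2, 3, 1).reshape(-1, c), dim=1)  # (N*H*W, C)
    logits = a @ b.t() / temperature                       # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    v1, v2 = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
    print(dense_info_nce(v1, v2).item())
```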