Abstract: Active learning (AL) has proven effective in supervised learning studies in computer vision (CV), but its integration with self-supervised learning (SSL) remains underexplored. In this study, we establish an "SSL+AL" sampling framework for remote sensing, combining active learning strategies with self-supervised pre-training (SSP) to identify pre-training samples that improve downstream task performance. Our findings indicate that, for remote sensing image classification, the choice of pre-training sampling method can affect downstream performance: with frozen features, uncertainty sampling outperforms random sampling when the budget exceeds 30% of the full dataset, whereas diversity sampling shows no significant advantage over the other sampling methods, particularly when the pre-training budget is small.
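To make the selection criterion concrete, the sketch below illustrates one common form of uncertainty sampling (ranking unlabeled samples by predictive entropy from a proxy model) applied to choosing a pre-training subset. The function name, the proxy-model probabilities, and the 30% budget are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch: entropy-based uncertainty sampling for a pre-training budget.
# Assumes class probabilities from some proxy model are already available.
import numpy as np

def uncertainty_select(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` samples with highest predictive entropy.

    probs: (n_samples, n_classes) class probabilities from a proxy model.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Highest-entropy (most uncertain) samples come first.
    return np.argsort(-entropy)[:budget]

# Toy usage: pick 30% of a synthetic unlabeled pool.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
selected = uncertainty_select(probs, budget=int(0.3 * len(probs)))
print(selected.shape)  # (300,)
```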