Pseudo-label meta-learner in semi-supervised few-shot learning for remote sensing image scene classification
Abstract: Remote sensing image scene classification (RSISC) benefits greatly from few-shot learning, which enables the recognition of novel scenes from only a small amount of labeled data. Most previous works focus on learning representations of prior knowledge from the scarce labeled data while ignoring the potential information carried by large amounts of unlabeled data. In this paper, we introduce a novel semi-supervised few-shot pseudo-label propagation method that exploits this unlabeled knowledge. The approach uses the pseudo-loss produced by the classifier as an indirect measure of the credibility of pseudo-labeled samples. Building on this pseudo-loss confidence metric, we propose a semi-supervised method called the pseudo-label meta-learner (PLML) for RSISC. Specifically, a pseudo-loss estimation module maps the pseudo-labeled data obtained from different tasks into a unified pseudo-loss metric space. The distributions of pseudo-losses for correct and incorrect pseudo-labels are then fitted by a semi-supervised beta mixture model (ss-BMM), which iteratively selects high-quality unlabeled data to enhance the classifier's self-training. Finally, to address the problem of shifting pseudo-loss distributions in remote sensing images, a progressive self-training strategy is proposed to mitigate the cumulative error induced by the classifier. Experimental results demonstrate that the proposed PLML outperforms existing alternatives on the NWPU-RESISC45, AID, and UC Merced datasets.
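The core selection step described above (fitting a two-component beta mixture to pseudo-losses and keeping samples likely to carry correct pseudo-labels) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the EM procedure with weighted method-of-moments updates, the median-based initialization, and the 0.5 posterior threshold are all assumptions chosen for simplicity.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def _weighted_beta_mom(x, w):
    """Weighted method-of-moments estimate of Beta(a, b) parameters."""
    w = w / (w.sum() + 1e-12)
    m = np.sum(w * x)
    v = np.sum(w * (x - m) ** 2) + 1e-8
    common = max(m * (1 - m) / v - 1, 1e-2)  # keep parameters positive
    return m * common, (1 - m) * common

def fit_bmm(losses, n_iter=30, eps=1e-4):
    """Fit a 2-component beta mixture to normalized pseudo-losses via EM.

    Component 0 is anchored on low losses (likely-correct pseudo-labels),
    component 1 on high losses (likely-incorrect ones).
    """
    x = np.clip(losses, eps, 1 - eps)  # Beta support is the open interval (0, 1)
    # initialize responsibilities by splitting at the median loss
    r = np.zeros((len(x), 2))
    r[:, 0] = x <= np.median(x)
    r[:, 1] = 1 - r[:, 0]
    for _ in range(n_iter):
        # M-step: mixture weights and per-component Beta parameters
        pi = r.mean(axis=0)
        params = [_weighted_beta_mom(x, r[:, k]) for k in range(2)]
        # E-step: posterior responsibility of each component per sample
        like = np.stack(
            [pi[k] * beta_dist.pdf(x, *params[k]) for k in range(2)], axis=1
        )
        r = like / like.sum(axis=1, keepdims=True)
    return pi, params, r

# toy demo: low pseudo-losses come from correct pseudo-labels, high from incorrect
rng = np.random.default_rng(0)
clean = rng.beta(2, 8, size=200)   # mean loss ~ 0.2
noisy = rng.beta(8, 2, size=200)   # mean loss ~ 0.8
losses = np.concatenate([clean, noisy])
_, _, resp = fit_bmm(losses)
selected = resp[:, 0] > 0.5        # keep samples the clean mode claims
```

In a full pipeline, the `selected` samples would be fed back into classifier self-training, with the mixture refitted each round as the pseudo-loss distribution shifts.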