SelfMTL: Self-Supervised Meta-Transfer Learning via Contrastive Representation for Hyperspectral Target Detection
Abstract: Hyperspectral target detection (HTD) aims to identify targets of interest in a scene by exploiting prior target spectra. Existing deep learning-based HTD methods usually need to generate a large number of samples for network training, and these generated samples often suffer from distortion; in addition, most methods apply only to a single scene. To address these issues, this study proposes a self-supervised meta-transfer learning (SelfMTL) method that improves the generalization ability and adaptability of the model through contrastive representation. First, labeled source data, which contain rich feature information, are used to train the global-local spectral contrastive learning (GLSL) module on a classification task by randomly constructing positive and negative pairs from different land covers, so that the module learns to discriminate the similarities and differences between spectra. Then, small-sample fine-tuning (only one target-background pair) transfers the pretrained GLSL module to different target detection (TD) tasks. Finally, a novel adaptive spatial-spectral enhancement (ASSE) module is proposed, which imposes joint learning constraints on spatial and spectral information to produce the final detection result map. Experimental results on four real hyperspectral image (HSI) datasets verify the superiority of SelfMTL over many classical and state-of-the-art HTD methods. The code is available at https://github.com/ShissHAN/SelfMTL.
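The abstract's core idea of constructing positive pairs (same land cover) and negative pairs (different land covers) for contrastive pretraining can be illustrated with a minimal sketch. This is not the paper's implementation; the toy data, pair-sampling helper, and margin-based contrastive loss below are illustrative assumptions, standing in for the GLSL module's actual architecture and loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy source data: 5 land-cover classes, 20 spectra each, 50 bands.
# Class-dependent offsets make spectra of the same class more similar.
n_classes, per_class, bands = 5, 20, 50
labels = np.repeat(np.arange(n_classes), per_class)
spectra = rng.normal(size=(n_classes * per_class, bands)) + labels[:, None]

def sample_pair(spectra, labels, positive, rng):
    """Draw a spectrum pair: same land cover (positive) or different (negative)."""
    i = rng.integers(len(labels))
    same = np.flatnonzero(labels == labels[i])
    diff = np.flatnonzero(labels != labels[i])
    pool = same[same != i] if positive else diff
    j = rng.choice(pool)
    return spectra[i], spectra[j], bool(labels[i] == labels[j])

def contrastive_loss(a, b, is_positive, margin=1.0):
    """Margin-based contrastive loss on Euclidean distance between spectra:
    pull positive pairs together, push negative pairs beyond the margin."""
    d = np.linalg.norm(a - b)
    return d**2 if is_positive else max(0.0, margin - d) ** 2

a, b, pos = sample_pair(spectra, labels, positive=True, rng=rng)
loss_pos = contrastive_loss(a, b, pos)
```

In the paper, such pairs would drive pretraining of an encoder on labeled source scenes; at deployment, only one target-background pair is needed to fine-tune for a new detection task.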
External IDs: dblp:journals/tgrs/LuoSQGFL25