Leveraging multi-class background description and token dictionary representation for hyperspectral anomaly detection
Abstract: Hyperspectral anomaly detection aims to distinguish between background and anomalous regions in hyperspectral images, and plays a crucial role in various applications. However, existing deep learning methods face challenges when dealing with complex background distributions and insufficient training data. In this article, we propose a novel multi-class background description transformer network (MBDTNet) to address the problems of imprecise background distribution learning and poor anomaly detection. Firstly, we propose an image-level end-to-end data augmentation method based on self-supervised training, which enhances the diversity and quantity of the training samples through adaptive clustering and spatial masking strategies. Secondly, based on the principles of low-rank representation, a sparse self-attention mechanism based on token dictionary representation is designed to help the model focus on key background features and guide the model in recognizing anomalies. Finally, a token dictionary learning mechanism for multi-class background description is established by combining Gaussian discriminant analysis with a conditional distance function, and intra-class and inter-class losses are designed to enhance the model's ability to separate background and anomalies. Experiments on five benchmark datasets demonstrate the superiority and applicability of the proposed MBDTNet method, showing that it outperforms current state-of-the-art hyperspectral anomaly detection methods.
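The token-dictionary sparse attention summarized above can be sketched in a minimal form: each pixel token attends only to a small learned dictionary of background tokens rather than to all other tokens, so the attention map is low-rank and the reconstruction residual serves as an anomaly cue. The function name, the toy dictionary, and the residual-based scoring below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def token_dictionary_attention(x, dictionary, temperature=1.0):
    """Sparse attention against a small token dictionary (illustrative sketch).

    x:          (n, d) array of pixel/spectral tokens
    dictionary: (k, d) array of background tokens, with k << n
    Each token attends only to the k dictionary entries, so the attention
    map is (n, k) instead of (n, n): a low-rank background description
    that anomalous tokens reconstruct poorly.
    """
    scores = x @ dictionary.T / (np.sqrt(x.shape[1]) * temperature)  # (n, k)
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)            # softmax over dictionary
    recon = attn @ dictionary                          # background reconstruction
    anomaly_score = np.linalg.norm(x - recon, axis=1)  # large residual => anomaly
    return recon, anomaly_score

# Toy example: 64 background tokens, one injected anomaly at index 0.
rng = np.random.default_rng(0)
tokens = rng.normal(0.0, 0.1, size=(64, 8))
tokens[0] += 5.0                      # spectrally deviant token
dictionary = tokens[1:5].copy()       # small 4-entry background dictionary
_, scores = token_dictionary_attention(tokens, dictionary)
print(scores.argmax())  # → 0 (the injected anomaly has the largest residual)
```

Because the reconstruction is a convex combination of dictionary entries, tokens that lie near the background subspace are reconstructed well, while the injected outlier is not, which is the intuition behind using the residual to guide anomaly recognition.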