Abstract: Conventional models for hyperspectral image (HSI) classification usually demand a substantial quantity of labeled training data. However, when labeled training HSIs are unavailable or have distributions that differ from those of the test HSIs, many classification models exhibit significant performance declines. For the cross-scene HSI classification task, unsupervised domain adaptation (UDA) techniques have been widely developed. Existing discrepancy-based or adversarial-based UDA methods may fail to learn discriminative class boundaries when a large class distribution shift (CDS) exists. To alleviate this limitation, we propose a model named the spatially enhanced refined classifier (SERC), which consists of a coarse classifier (CC) and a refined classifier (RC). The RC constructs a memory module to fuse global-spatial and spectral information simultaneously and uses a neighborhood aggregation technique to generate refined predictions. The refined predictions are then converted into pseudolabels to train the CC, thereby establishing a mutually reinforcing relationship between the two classifiers. Furthermore, we propose a class distribution match (CDM) strategy to further alleviate the severe CDS problem. Notably, SERC does not require the additional trainable parameters that are commonly introduced by existing UDA methods. Despite its simplicity, SERC achieves outstanding results. Our method has been extensively evaluated on three public HSI datasets and shows superior performance compared with state-of-the-art (SOTA) approaches. The source code is available at https://github.com/Wangzs0228/SERC.
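To make the refinement-and-pseudolabeling idea concrete, the following PyTorch sketch shows one generic way a feature memory bank and k-nearest-neighbor aggregation could refine coarse predictions and feed them back as pseudolabels. It is an illustrative sketch only, not the authors' SERC implementation (see the linked repository for that): the names FeatureMemory, refine_predictions, and pseudolabel_loss and the parameters k and tau are hypothetical, and the fusion of global-spatial with spectral information inside SERC's memory module is omitted here.

```python
# Minimal sketch of neighborhood-aggregation refinement with pseudolabel feedback.
# Hypothetical names/parameters; not the authors' SERC implementation.
import torch
import torch.nn.functional as F


class FeatureMemory:
    """Memory bank of target-domain features and the coarse classifier's soft
    predictions. (The spatial-fusion part of SERC's memory module is omitted.)"""

    def __init__(self, num_samples, feat_dim, num_classes):
        self.features = torch.zeros(num_samples, feat_dim)
        self.probs = torch.full((num_samples, num_classes), 1.0 / num_classes)

    @torch.no_grad()
    def update(self, idx, feats, probs):
        # Store L2-normalized features so dot products equal cosine similarity.
        self.features[idx] = F.normalize(feats, dim=1)
        self.probs[idx] = probs


@torch.no_grad()
def refine_predictions(feats, memory, k=10):
    """Refine predictions by similarity-weighted voting over the k nearest
    neighbors stored in the memory (generic neighborhood aggregation)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ memory.features.t()          # (batch, memory) cosine similarities
    topk_sim, topk_idx = sim.topk(k, dim=1)
    weights = F.softmax(topk_sim, dim=1)       # weight neighbors by similarity
    return (weights.unsqueeze(-1) * memory.probs[topk_idx]).sum(dim=1)


def pseudolabel_loss(coarse_logits, refined_probs, tau=0.9):
    """Convert confident refined predictions into pseudolabels that supervise
    the coarse classifier (all tensors assumed to be on the same device)."""
    conf, labels = refined_probs.max(dim=1)
    mask = conf.ge(tau)
    if not mask.any():
        return coarse_logits.sum() * 0.0       # keep the graph, contribute no gradient
    return F.cross_entropy(coarse_logits[mask], labels[mask])
```

In this sketch the memory is updated with the coarse classifier's outputs, the refined predictions smooth those outputs over feature-space neighbors, and only predictions above the confidence threshold tau are turned into pseudolabels, which is one common way to keep the two-classifier feedback loop from reinforcing noisy labels.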