Abstract: In hyperspectral cross-scene classification tasks, it is often difficult to obtain target-domain samples during training. Models must therefore be trained on one or more source domains and generalize well to unknown target domains, a setting known as domain generalization. Domain shift limits a model's ability to generalize across domains, and the unknown target domain makes it hard to accurately characterize inter-domain distribution differences. To address this issue, we propose a generalization network based on nonlinear sample generation. The network divides sample features into invariant and variant features and generates new samples by applying nonlinear transformations to the variant features. To ensure the quality of the generated samples, we introduce contrastive learning into the model: it keeps the generated samples consistent in similarity with the source samples while preserving a certain degree of dissimilarity. Experiments on four cross-domain scenarios demonstrate the superior performance of the proposed method.
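The core idea can be sketched in a few lines of code: features are split into an invariant part and a variant part, only the variant part is passed through a nonlinear transform to generate a new sample, and a contrastive-style objective keeps generated samples similar to, but not identical with, their source counterparts. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation; the module names, the feature-split convention, the two-layer transform, and the margin-based loss are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonlinearSampleGenerator(nn.Module):
    """Illustrative sketch: split features into invariant/variant parts and
    generate a new sample by nonlinearly transforming only the variant part."""

    def __init__(self, feat_dim: int, variant_dim: int):
        super().__init__()
        self.variant_dim = variant_dim
        # Hypothetical nonlinear transform applied to the variant features only.
        self.transform = nn.Sequential(
            nn.Linear(variant_dim, variant_dim),
            nn.ReLU(),
            nn.Linear(variant_dim, variant_dim),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Split features into an invariant part and a variant part
        # (the split point is an assumption made for illustration).
        invariant, variant = features.split(
            [features.size(1) - self.variant_dim, self.variant_dim], dim=1
        )
        generated_variant = self.transform(variant)
        # Recombine: the invariant part is kept, the variant part is replaced.
        return torch.cat([invariant, generated_variant], dim=1)


def contrastive_quality_loss(source: torch.Tensor,
                             generated: torch.Tensor,
                             margin: float = 0.3) -> torch.Tensor:
    """Toy contrastive-style objective (an assumption): keep generated samples
    similar to their source samples while enforcing some dissimilarity."""
    sim = F.cosine_similarity(source, generated, dim=1)
    too_dissimilar = F.relu(margin - sim)         # pull together if too far apart
    too_similar = F.relu(sim - (1.0 - margin))    # push apart if near-identical
    return (too_dissimilar + too_similar).mean()
```

Under these assumptions, the generated samples augment the source domains with plausible variations while the invariant features, presumed to carry class-discriminative information, remain untouched; the margin-based loss is one simple way to realize the "similar but not identical" constraint described above.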