Abstract: Motor Imagery Brain-Computer Interface (MI-BCI) is a key technology within Brain-Computer Interfaces (BCIs). In practical applications, high-accuracy cross-subject decoding is difficult to achieve because of the multi-source heterogeneity caused by individual differences among subjects. Transfer learning has been widely applied to this problem, but existing transfer learning models neglect the frequency features of EEG signals and the temporal pattern dependencies present in high-level features, which limits the performance and generalization ability of current cross-subject decoding methods. This study leverages frequency prior knowledge related to motor imagery for data augmentation and designs a feature extractor, combined with adversarial learning, to achieve cross-subject feature alignment. Unlike traditional adversarial learning frameworks, this work introduces a temporal causal inference module into the classifier to reprocess high-level features, dynamically model the temporal characteristics of EEG signals to reduce bias, and achieve efficient decoding through synchronous training of the discriminator and classifier. Built on a pure convolutional neural network (CNN) architecture, the framework achieved decoding accuracies of 82.64% and 85.51% on the BCI Competition IV 2a and 2b datasets, respectively, surpassing state-of-the-art Transformer architectures (e.g., GAT, BLSAN). These results provide a feasible solution for developing efficient, practical MI-BCI systems and lay a foundation for optimizing cross-subject decoding.
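The frequency-prior data augmentation described in the abstract can be sketched as band-limited views of each EEG trial. This is a minimal illustration, not the paper's actual implementation: the band choices (mu 8-13 Hz, beta 13-30 Hz, both commonly associated with motor imagery), the 250 Hz sampling rate, and all function names are assumptions.

```python
import numpy as np


def bandpass_fft(trial, fs, low, high):
    """Zero out FFT bins outside [low, high] Hz, per channel.

    trial: array of shape (channels, samples), real-valued EEG.
    """
    n = trial.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(trial, axis=-1)
    keep = (freqs >= low) & (freqs <= high)
    spec[..., ~keep] = 0.0  # discard out-of-band components
    return np.fft.irfft(spec, n=n, axis=-1)


def mi_frequency_augment(trial, fs=250):
    """Hypothetical augmentation: band-limited views built from
    MI-related rhythm priors (mu and beta bands)."""
    return {
        "mu": bandpass_fft(trial, fs, 8.0, 13.0),
        "beta": bandpass_fft(trial, fs, 13.0, 30.0),
    }


rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))  # e.g. 22 channels, 4 s at 250 Hz
views = mi_frequency_augment(trial)
```

Each augmented view keeps the original trial shape, so it can be fed to the same feature extractor as the raw signal; the cross-subject adversarial alignment and the temporal causal inference module in the paper operate downstream of such inputs.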
External IDs: dblp:conf/ecai/LiSWZZ25