EEG Motor imagery classification based on a ConvLSTM Autoencoder framework augmented by attention BiLSTM
Abstract: Brain signals have recently gained popularity in brain-computer interface (BCI) systems, providing valuable insights into various brain functions. Analyzing brain activity can reveal imagined movement, and electroencephalography (EEG) signals are well suited to recognizing motor imagery (MI) tasks. However, MI-EEG classification remains challenging due to the limited spatial resolution of EEG. In this study, we present a novel architecture for MI-EEG classification, comprising a Convolutional Long Short-Term Memory Autoencoder (ConvLSTMAE) for efficient feature extraction and an Attention-augmented Bidirectional Long Short-Term Memory (AtBiLSTM) classifier. The ConvLSTMAE captures spatiotemporal patterns in EEG signals, producing a compact latent representation. The AtBiLSTM then processes the learned representations, incorporating an attention mechanism that focuses the model on critical signal components while effectively capturing bidirectional temporal dependencies. Our method excels at motor imagery classification on the BCI Competition IV dataset 2a, achieving an accuracy of 89.70% and a kappa value of 87.96%, outperforming existing methods. We systematically evaluate the impact of the short-time Fourier transform (STFT), revealing a 10.91% accuracy improvement when transforming temporal signals to the frequency domain. Furthermore, replacing a support vector machine with the AtBiLSTM improves accuracy by 17.74%, demonstrating the effectiveness of the designed architecture. This research advances EEG-based MI classification, offering promise for neuroscientific insight and the development of efficient brain-computer interfaces. The proposed AtBi-ConvLSTMAE framework not only addresses limitations in MI-EEG classification but also exhibits superior performance, lower standard deviations, and improved generalizability.
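The STFT preprocessing step that the abstract credits with a 10.91% accuracy gain can be illustrated with a minimal, numpy-only sketch. The sampling rate (250 Hz, as in BCI Competition IV dataset 2a), window length, hop size, and the synthetic 10 Hz mu-band signal below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def stft(x, win_len=64, hop=32):
    """Short-time Fourier transform via a Hann-windowed FFT.
    Returns a (frames, freq_bins) magnitude spectrogram."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic stand-in for one EEG channel: a 10 Hz mu-band oscillation
# plus noise (mu-rhythm modulation is a typical MI correlate), at 250 Hz.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

spec = stft(eeg)                       # (frames, bins); bin k spans k*fs/64 Hz
peak_hz = spec.mean(axis=0).argmax() * fs / 64
print(round(peak_hz, 1))               # dominant frequency near 10 Hz
```

Each trial channel becomes a time-frequency image, which is what makes convolutional front ends such as the ConvLSTMAE encoder applicable.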
External IDs: dblp:journals/mta/MirzaeiGB25