From Frequency to Temporal: Three Simple Steps Achieve Lightweight High-Performance Motor Imagery Decoding

Yuan Li, Diwei Su, Xiaonan Yang, Xiangcun Wang, Hongxi Zhao, Jiacai Zhang

Published in IEEE Trans. Biomed. Eng., 2026 (last modified 25 Mar 2026). License: CC BY-SA 4.0.
Abstract: Decoding motor imagery from electroencephalography (EEG) is hindered by noisy signals and high model computational complexity. Starting from EEGNet, this study achieves high-accuracy decoding in three steps. First, frequency-domain analysis was performed to reveal how deep learning models represent frequency content. Drawing on prior neuroscience knowledge of the key frequency bands for motor imagery, we adjusted EEGNet's convolution kernels and pooling sizes to focus on the effective bands. Second, a residual network was introduced to preserve high-frequency detail features. Finally, temporal convolution modules were used to capture temporal dependencies in depth, substantially improving feature discriminability. Experiments were conducted on the BCI Competition IV 2a and 2b datasets. The 2a dataset provides multi-channel data with 22 channels, while the 2b dataset contains only 3 channels, reflecting markedly different application scenarios. Our method achieved average classification accuracies of 86.23% and 86.75%, respectively, surpassing advanced models such as EEG-Conformer and EEG-TransNet. Meanwhile, the multiply-accumulate operations (MACs) totaled 27.16 M, a reduction of over 50% relative to the comparison models, and the forward/backward pass size was 14.33 MB, significantly reducing computational complexity and memory footprint. The design relies only on the simplest and most fundamental techniques, highlighting the critical role of brain science knowledge in model development. The proposed method demonstrates broad application potential.
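The first step above (sizing convolution kernels and pooling to the motor-imagery bands) can be sketched with a small helper. This is an illustrative sketch only: the function names, the one-cycle-of-the-lowest-frequency rule for kernel length, and the Nyquist-based pooling rule are assumptions, not the authors' exact configuration; the 250 Hz sampling rate and the mu (8–13 Hz) / beta (13–30 Hz) band edges are standard for the BCI Competition IV 2a/2b datasets.

```python
# Hypothetical sizing helpers for an EEGNet-style temporal stage, tuned to the
# mu (8-13 Hz) and beta (13-30 Hz) motor-imagery bands. The sizing rules are
# illustrative assumptions, not the paper's exact design.

def temporal_kernel_length(fs: int, f_low: float) -> int:
    """Kernel long enough to span one full cycle of the lowest band edge,
    so the temporal convolution can resolve frequencies down to f_low."""
    return int(round(fs / f_low))

def pool_size_for_band(fs: int, f_high: float) -> int:
    """Largest pooling factor that keeps the post-pooling sampling rate at or
    above the Nyquist rate (2 * f_high) for the highest band edge."""
    return max(1, int(fs // (2 * f_high)))

fs = 250                                  # BCI IV 2a/2b sampling rate (Hz)
kernel = temporal_kernel_length(fs, 8.0)  # mu-band lower edge
pool = pool_size_for_band(fs, 30.0)       # beta-band upper edge
print(kernel, pool)
```

With these rules, the temporal kernel covers one cycle of the slowest mu-band rhythm, while pooling discards temporal resolution only down to the rate still sufficient to represent the beta band, so both key bands survive the downsampling stage.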