Learning EEG Motor Characteristics via Temporal-Spatial Representations

Published: 01 Jan 2025, Last Modified: 12 Jun 2025. IEEE Trans. Emerg. Top. Comput. Intell. 2025. License: CC BY-SA 4.0.
Abstract: Electroencephalography (EEG) is a widely used neural imaging technique for modeling motor characteristics. However, current studies have primarily focused on temporal representations of EEG, with less emphasis on the spatial and functional connections among electrodes. This study introduces a novel two-stream model that analyzes both temporal and spatial representations of EEG to learn motor characteristics. Temporal representations are extracted with a set of convolutional neural networks (CNNs) treated as dynamic filters, while spatial representations are learned by graph neural networks (GNNs) using learnable adjacency matrices. At each stage, a res-block is designed to integrate temporal and spatial representations, facilitating the fusion of temporal-spatial information. Finally, the summarized representations of both streams are fused with fully connected neural networks to learn motor characteristics. Experimental evaluations on Physionet, OpenBMI, and BCI Competition IV Dataset 2a demonstrate the model's efficacy, achieving accuracies of $73.6\%/70.4\%$ for four-class subject-dependent/independent paradigms, $84.2\%/82.0\%$ for two-class subject-dependent/independent paradigms, and $78.5\%$ for a four-class subject-dependent paradigm, respectively. These encouraging results underscore the model's potential for understanding EEG-based motor characteristics, paving the way for advanced brain-computer interface systems.
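The two-stream idea in the abstract — a temporal CNN stream, a spatial graph stream with a learnable adjacency matrix, and residual fusion of the two — can be caricatured in a few lines of NumPy. This is a minimal sketch under assumed shapes (8 electrodes, 128 time samples, a single kernel); it is not the authors' implementation, and in a real model the filters and adjacency would be trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG segment: 8 electrodes (graph nodes) x 128 time samples (assumed sizes).
n_electrodes, n_samples = 8, 128
x = rng.standard_normal((n_electrodes, n_samples))

# --- Temporal stream: 1-D convolutional filtering per electrode ---
# Stand-in for the paper's CNN "dynamic filters"; the kernel here is random.
kernel = rng.standard_normal(7) / 7.0
temporal = np.stack([np.convolve(ch, kernel, mode="same") for ch in x])

# --- Spatial stream: graph propagation with a learnable adjacency matrix ---
# During training the adjacency logits would be free parameters; here they are
# randomly initialised and row-normalised (softmax) for illustration.
logits = rng.standard_normal((n_electrodes, n_electrodes))
adj = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
spatial = adj @ x  # each electrode aggregates signals from all others

# --- Res-block style fusion: add both streams back onto the input ---
fused = x + temporal + spatial

print(fused.shape)  # (8, 128)
```

In a full model this fusion would repeat at each stage, with the fused tensor feeding the next pair of temporal/spatial blocks before a fully connected classifier head.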