Cross-Modal Motor Representation Learning

Published: 2024 · Last Modified: 12 Jun 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Learning motor representations from the brain is challenging because motor-related and unrelated information are entangled within neural imaging data. This study introduces a cross-modal learning algorithm that uses electromyogram (EMG) muscle cues to refine the learning of electroencephalogram (EEG) motor representations. The algorithm begins with original EEG representations from a baseline motor classification model. EMG muscle cues are then learned to decompose the original EEG representations into motor-related and unrelated components. The decomposition is achieved by aligning the EMG representations more closely with the motor-related components and less with the unrelated ones. Experimental results on a self-collected multi-modal dataset show that the proposed algorithm yields a performance improvement of approximately 4% in motor classification across various algorithms, compared with the original EEG representations. This demonstrates the algorithm's effectiveness in isolating motor-related information from complex brain activities. The innovative use of muscle cues for learning EEG motor characteristics opens new possibilities for applying cross-modal learning to build more accurate brain-computer interfaces.
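The paper does not publish its loss in this abstract, so the following is only a minimal sketch of the decomposition-and-alignment idea it describes: an EEG representation is split into a motor-related and a motor-unrelated component, and a cosine-similarity objective pulls the EMG representation toward the former and away from the latter. All function names, projection matrices (`W_rel`, `W_unrel`), and the exact loss form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def decompose(eeg_repr, W_rel, W_unrel):
    """Split an EEG representation into motor-related and unrelated parts
    via two learned linear projections (hypothetical parameterization)."""
    return eeg_repr @ W_rel, eeg_repr @ W_unrel

def cosine(a, b):
    """Cosine similarity with a small epsilon for numerical stability."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def alignment_loss(emg_repr, motor_related, motor_unrelated):
    """Assumed alignment objective: maximize similarity between the EMG
    representation and the motor-related component, while penalizing any
    positive similarity with the motor-unrelated component."""
    pull = 1.0 - cosine(emg_repr, motor_related)          # attract
    push = max(0.0, cosine(emg_repr, motor_unrelated))    # repel
    return pull + push
```

Minimizing this loss over the projections would drive the motor-related component toward the EMG cue, which is the mechanism the abstract credits for isolating motor information from the entangled EEG signal.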