Cross-Subject and Cross-Montage EEG Transfer Learning with Individual Tangent Space Alignment
Keywords: EEG-based BCIs, Music-based gait rehabilitation, Sensorimotor Entrainment, Rhythmic cueing, Cross-subject variability, Transfer learning, Pre-alignment strategies, Regularised common spatial patterns, Riemannian Geometry
Introduction: Personalised music-based interventions are a promising frontier in motor rehabilitation, where music acts both as a dynamic external timekeeper that entrains and stabilises motor function and as a modulator of affective states. Generalisable Brain-Computer Interfaces (BCIs) could extend these interventions across individuals, but inter-subject variability in EEG signals, movement-induced artefacts and differences in motor planning hinder generalisability and require lengthy calibration. Common Spatial Patterns (CSP) spatially filters raw signals to maximise between-class variance but is noise-sensitive owing to its dependence on covariance estimates, while direct covariance-based methods capture spatial relationships but require non-Euclidean algorithms. Transfer learning mitigates covariate shift by leveraging prior information; however, current strategies are limited by their reliance on Euclidean rather than Riemannian space [1], by the use of global rather than subject-specific alignments [2], and by their focus on reducing cross-dataset variability to compensate for limited training data while ignoring cross-domain scenarios with different channel configurations.
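To make this contrast concrete, the snippet below sketches the tangent-space projection of symmetric positive-definite (SPD) covariance matrices that underpins Riemannian BCI pipelines. It is a minimal sketch assuming the open-source pyriemann library, with simulated data and illustrative shapes rather than the settings of the cited studies.

    # Minimal sketch: projecting SPD covariance matrices to the tangent
    # space, where standard Euclidean classifiers apply.
    import numpy as np
    from pyriemann.estimation import Covariances
    from pyriemann.tangentspace import TangentSpace

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 19, 256))   # (trials, channels, samples)

    covs = Covariances(estimator="oas").transform(X)   # SPD, (40, 19, 19)
    # Map each trial to the tangent space at the Riemannian mean of the set.
    feats = TangentSpace(metric="riemann").fit_transform(covs)
    print(feats.shape)                       # (40, 19*20/2) = (40, 190)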
Methods: We introduce Individual Tangent Space Alignment (ITSA) [3], a novel pre-alignment strategy (PS) designed to address inter-subject and cross-domain variability in EEG-based BCIs. ITSA comprises three steps: (1) subject-specific recentring, which preserves individual structure while establishing a common reference point; (2) distribution matching via feature rescaling; and (3) supervised rotational alignment of the class-wise means. We employ a parallel fusion of RCSP and Riemannian features, in which spatially filtered and tangent-space features are extracted concurrently and horizontally concatenated to form the input representation, maximising class separability through discriminative features while preserving the geometric structure of the covariance matrices for more accurate statistical computations. To simulate montage-specific cross-domain experiments, we extract channel subsets from the high-density dataset to mimic low-density testing configurations, and apply PCA feature reduction to ensure dimensionality compatibility between training and testing inputs. We use a publicly available auditory-cueing dataset [4] of EEG recorded during gait from subjects adapting to increasing or decreasing rhythmic tempos. A support vector machine classifies adaptive versus non-adaptive signals in both leave-one-subject-out cross-validation (LOSO-CV) and cross-montage experiments.
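The sketch below outlines one plausible instantiation of the three ITSA steps and the RCSP-Riemannian feature fusion, assuming pyriemann, SciPy and scikit-learn. The specific recentring, rescaling and rotation operators are illustrative choices rather than the exact formulation, which is given in [3], and pyriemann's plain CSP stands in for the regularised variant.

    # Hedged sketch of an ITSA-style pipeline: three alignment steps plus
    # parallel RCSP-Riemannian feature fusion. All operator choices and
    # hyperparameters here are illustrative assumptions.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes
    from pyriemann.estimation import Covariances
    from pyriemann.tangentspace import TangentSpace
    from pyriemann.spatialfilters import CSP
    from pyriemann.utils.mean import mean_riemann
    from pyriemann.utils.base import invsqrtm
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def recentre(covs):
        # Step 1: subject-specific recentring -- whiten each covariance by
        # the inverse square root of the subject's Riemannian mean, so all
        # subjects share the identity as a common reference point.
        M = invsqrtm(mean_riemann(covs))
        return np.array([M @ C @ M for C in covs])

    def rescale(feats):
        # Step 2: distribution matching -- one plausible rescaling is to
        # normalise each subject's tangent vectors to unit mean norm.
        return feats / np.mean(np.linalg.norm(feats, axis=1))

    def rotate(feats, y, target_means):
        # Step 3: supervised rotation -- orthogonal Procrustes alignment of
        # this subject's class-wise mean vectors onto reference class means
        # (target_means would come from the training pool).
        src = np.array([feats[y == c].mean(axis=0) for c in np.unique(y)])
        R, _ = orthogonal_procrustes(src, target_means)
        return feats @ R

    # Fused RCSP-Riemannian features for one simulated subject.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 19, 256))   # (trials, channels, samples)
    y = rng.integers(0, 2, 40)
    covs = recentre(Covariances(estimator="oas").transform(X))
    ts = rescale(TangentSpace(metric="riemann").fit_transform(covs))
    csp = CSP(nfilter=4, log=True).fit(covs, y).transform(covs)
    fused = np.hstack([csp, ts])             # horizontal concatenation
    clf = SVC(kernel="linear").fit(PCA(n_components=20).fit_transform(fused), y)

In a cross-montage run, the same pipeline would be fitted on the 108-channel training covariances and applied to the simulated 60- or 19-channel subsets, with PCA equalising the feature dimensionality between the two domains.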
Results: The ITSA-RCSP-Riemannian framework significantly improves cross-subject performance in LOSO-CV across both temporal conditions and outperforms two state-of-the-art pre-alignment strategies [1], [2]. It also mitigates cross-montage variability, retaining high performance when models trained on the high-density configuration are tested on both low-density setups.
Figure 1: LOSO-CV classification results for the cross-montage experiments under both temporal conditions. Models were trained on the high-density dataset (108 electrodes), and the trained models were tested on simulated 10-10 and 10-20 montages of 60 and 19 electrodes, respectively.
Discussion: Our proposed ITSA, combining subject-specific recentring and tangent-space alignment with the fused RCSP-Riemannian approach, improves generalisability across new subjects and montages. Enhancing BCI generalisability, as demonstrated in our LOSO-CV and cross-montage experiments, is a critical step towards real-world BCI integration alongside music-based interventions.
[1] H. He and D. Wu, ‘Transfer Learning for Brain-Computer Interfaces: A Euclidean Space Data Alignment Approach’, IEEE Trans Biomed Eng, vol. 67, no. 2, pp. 399–410, Feb. 2020, doi: 10.1109/TBME.2019.2913914.
[2] A. Bleuzé, J. Mattout, and M. Congedo, ‘Tangent space alignment: Transfer learning for Brain-Computer Interface’, Front. Hum. Neurosci., vol. 16, Dec. 2022, doi: 10.3389/fnhum.2022.1049985.
[3] N. Lai-Tan, X. Gu, M. G. Philiastides, and F. Deligianni, ‘Cross-subject and cross-montage EEG transfer learning via individual tangent space alignment and spatial-Riemannian feature fusion', arXiv preprint arXiv:2508.08216, 2025. Available: https://arxiv.org/abs/2508.08216
[4] J. Wagner et al., ‘High-density EEG mobile brain/body imaging data recorded during a challenging auditory gait pacing task’, Sci Data, vol. 6, p. 211, Oct. 2019, doi: 10.1038/s41597-019-0223-2.
Submission Number: 35