EMBEDDING DOMAIN-SPECIFIC INVARIANCES INTO CONTRASTIVE LEARNING FOR CALIBRATION-FREE NEURAL DECODING
Keywords: Steady-State Visual Evoked Potentials (SSVEP), Brain–Computer Interface (BCI), Calibration-Free Neural Decoding, Contrastive Learning, Domain Adaptation, CORAL (Correlation Alignment), Task-Related Component Analysis (TRCA), Filter-Bank Canonical Correlation Analysis (FBCCA), EEG Signal Processing, Harmonic-Invariant Representation.
TL;DR: We propose DATCAN, a calibration-free framework for SSVEP-based neural decoding that combines harmonic-aware contrastive learning, CORAL alignment, and TRCA/FBCCA fusion to achieve robust short-window performance (>100 bits/min at 1 s).
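The bits/min figure above is the standard Wolpaw information transfer rate. A minimal sanity-check sketch (the function name and the example class count, accuracy, and window length are illustrative assumptions, not the paper's reported configuration):

```python
from math import log2

def itr_bits_per_min(n_classes: int, accuracy: float, window_s: float) -> float:
    """Wolpaw ITR for an n-class selection task decoded from windows of window_s seconds."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, ITR is conventionally zero
    bits = log2(n_classes)
    if accuracy < 1.0:  # guard: the penalty terms vanish (log2(1) = 0) at perfect accuracy
        bits += accuracy * log2(accuracy)
        bits += (1.0 - accuracy) * log2((1.0 - accuracy) / (n_classes - 1))
    return bits * 60.0 / window_s

# Illustrative only: a 40-class SSVEP speller at 1 s windows already exceeds
# 100 bits/min at roughly 50% accuracy under this formula.
itr = itr_bits_per_min(n_classes=40, accuracy=0.5, window_s=1.0)
```

This makes the ">100 bits/min at 1 s" regime concrete: the requirement scales with both the class count and the achievable short-window accuracy.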
Abstract: **Steady-state visual evoked potentials (SSVEPs)** provide a high-throughput testbed for neural decoding, yet real-world deployment is hindered by *subject-specific calibration*. We address this challenge by proposing **DATCAN**, a framework that *embeds domain-specific invariances into contrastive learning* while *aligning feature statistics without supervision*. DATCAN integrates three complementary components: (i) a **harmonic-aware contrastive objective** that encodes *frequency-locked physiological priors* directly into the embedding space, (ii) **second-order covariance alignment (CORAL)** that stabilizes cross-subject transfer through *closed-form adaptation*, and (iii) **adaptive late fusion** of interpretable classical heads (*Task-Related Component Analysis, TRCA*; *Filter-Bank Canonical Correlation Analysis, FBCCA*) with *normalized weighting*. Contrastive pairing uses only *source-subject labels*: **positives** are other-subject trials evoked by the *same known stimulus frequency (including harmonics)*, while **negatives** come from *different frequencies*. At inference, the **TRCA/FBCCA heads** score each frequency class, mapping embeddings to symbols *without any target-subject calibration*. Evaluated under strict *leave-one-subject-out transfer*, **DATCAN achieves robust short-window decoding**, sustaining **an information transfer rate above 100 bits/min at 1 s windows**, a regime where **existing calibration-free baselines** substantially underperform. *Ablation and interpretability analyses confirm that each module contributes principled gains, yielding physiologically grounded, subject-invariant representations.* Beyond electroencephalography (EEG), our results highlight a *general recipe for calibration-free domain adaptation*: **encode physics-driven invariances** in contrastive learning, **align covariances without labels**, and **integrate interpretable ensembles**.
This blueprint extends naturally to other *sequential and biosignal domains* where *distribution shift and data scarcity* remain central obstacles.
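The CORAL step named in the abstract has a well-known closed form: whiten the source features, then re-color them with the target covariance, so second-order statistics match without using any labels. A minimal sketch (the mean-matching step and the `eps` regularizer are our additions for numerical stability; this is the classic recipe, not necessarily the paper's exact variant):

```python
import numpy as np

def coral_align(source: np.ndarray, target: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Closed-form CORAL: map source features so their covariance (and here,
    mean) matches the target's. Inputs are (n_samples, n_features); unlabeled."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)  # regularized source covariance
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)  # regularized target covariance

    def sqrtm_sym(m: np.ndarray, inverse: bool = False) -> np.ndarray:
        # Symmetric (inverse) matrix square root via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        p = vals ** (-0.5 if inverse else 0.5)
        return (vecs * p) @ vecs.T

    centered = source - source.mean(axis=0)
    # Whiten with Cs^{-1/2}, re-color with Ct^{1/2}, then shift to the target mean
    aligned = centered @ sqrtm_sym(cs, inverse=True) @ sqrtm_sym(ct)
    return aligned + target.mean(axis=0)
```

Because the transform is a single matrix product per feature batch, the adaptation cost is negligible next to training, which is what makes the "closed-form adaptation" claim practical at inference time.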
*Reproducibility: Code, preprocessing scripts, and evaluation notebooks with fixed seeds are provided in the supplementary material (anonymous).*
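One plausible reading of the abstract's pairing rule, positives being cross-subject trials whose stimulus frequencies agree up to an integer harmonic, can be sketched as a mask builder for the contrastive loss (the function name, tolerance, and harmonic count below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def harmonic_positive_mask(freqs, subjects, n_harmonics: int = 3, tol: float = 0.05):
    """Boolean (n_trials, n_trials) mask of contrastive positives.

    Trials i and j count as positives when they come from *different* subjects
    and their stimulus frequencies match up to an integer harmonic relation
    (f_j ≈ k * f_i or f_i ≈ k * f_j for k = 1..n_harmonics)."""
    freqs = np.asarray(freqs, dtype=float)
    subjects = np.asarray(subjects)
    n = len(freqs)
    mask = np.zeros((n, n), dtype=bool)
    for k in range(1, n_harmonics + 1):
        # k = 1 covers same-frequency pairs; k > 1 covers harmonic pairs
        mask |= np.abs(freqs[None, :] - k * freqs[:, None]) < tol
        mask |= np.abs(freqs[:, None] - k * freqs[None, :]) < tol
    mask &= subjects[:, None] != subjects[None, :]  # cross-subject pairs only
    np.fill_diagonal(mask, False)                   # a trial is not its own positive
    return mask
```

Everything outside the mask (different, non-harmonic frequencies) then serves as the negative set, which is how the frequency-locked prior enters the embedding space.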
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Submission Number: 20424