Bayesian Test-Time Adaptation via Dirichlet Feature Projection and GMM-Driven Inference for Motor Imagery EEG Decoding

ICLR 2026 Conference Submission 17264 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Brain-computer interface, motor imagery, test-time adaptation, Dirichlet distribution, Bayesian inference
TL;DR: BTTA-DG is a lightweight, gradient-free Bayesian test-time adaptation framework that projects a model's sequential EEG embeddings into a compact Dirichlet space and uses GMM-driven Bayesian inference to robustly calibrate motor imagery predictions.
Abstract: Generalization in EEG-based motor imagery (MI) brain-computer interfaces (BCIs) is severely hampered by cross-subject and cross-session variability. Although large-scale EEG pretraining has advanced representation learning, practical deployment of pretrained models is hindered by the need for costly fine-tuning to overcome significant domain shifts. Test-time adaptation (TTA) methods, which adapt models during inference, offer a promising solution. However, existing EEG-TTA methods either rely on gradient-based fine-tuning (suffering from high computational cost and catastrophic forgetting) or on data-alignment strategies (failing to capture shifts in deep feature distributions). To address these limitations, we propose BTTA-DG, a novel Bayesian Test-Time Adaptation framework that performs efficient, gradient-free adaptation by directly modeling the distribution of deep features. Our approach first employs a lightweight SincAdaptNet with learnable filters to extract task-specific frequency bands. We then introduce a novel Dirichlet feature projection that maps high-dimensional sequential embeddings onto a compact and interpretable parameter space, effectively capturing the concentration of time-varying predictive evidence. Adaptation is achieved via a GMM-driven Bayesian inference mechanism, which models the historical distribution of these Dirichlet parameters and fuses this evidence with the model's prior predictions to calibrate outputs for the target domain. Extensive experiments show that BTTA-DG significantly outperforms previous EEG-TTA methods, achieving state-of-the-art accuracy while running at real-time speed. Furthermore, visualizations confirm the physiological interpretability of our learned filters and the robust class separability of our Dirichlet feature space.
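To make the adaptation pipeline in the abstract concrete, the following is a minimal, illustrative sketch, not the authors' implementation: per-step class probabilities from the sequential embeddings are pooled into Dirichlet concentration parameters, a Gaussian mixture is fit per class over the log-parameters of previously seen test trials, and its likelihood is fused with the model's prior prediction via Bayes' rule. The function names, the pseudo-count projection, and the per-class mixture design are all assumptions made for illustration.

```python
# Hypothetical sketch of Dirichlet projection + GMM-driven Bayesian calibration.
# Assumptions (not from the paper): softmax probabilities act as pseudo-counts,
# and one diagonal-covariance GMM is fit per class on log-concentration features.
import numpy as np
from sklearn.mixture import GaussianMixture


def dirichlet_project(seq_logits):
    """Map sequential class logits of shape (T, K) to Dirichlet concentrations (K,).

    Per-step softmax probabilities are summed as pseudo-counts, so
    alpha_k = 1 + sum_t p_t[k] reflects how concentrated the time-varying
    predictive evidence is on class k.
    """
    p = np.exp(seq_logits - seq_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return 1.0 + p.sum(axis=0)


class GMMCalibrator:
    """Fits one Gaussian mixture per class over historical Dirichlet parameters
    (in log space) and fuses the resulting likelihood with the model's prior
    prediction via Bayes' rule."""

    def __init__(self, n_components=2):
        self.n_components = n_components
        self.gmms = {}

    def fit(self, alphas, pseudo_labels):
        # alphas: (N, K) Dirichlet parameters of past test trials;
        # pseudo_labels: (N,) predicted classes used as soft supervision.
        feats = np.log(alphas)
        for k in np.unique(pseudo_labels):
            gmm = GaussianMixture(n_components=self.n_components,
                                  covariance_type="diag")
            gmm.fit(feats[pseudo_labels == k])
            self.gmms[k] = gmm

    def calibrate(self, alpha, prior_probs):
        # Fuse the model's prior class probabilities with the per-class
        # GMM log-likelihood of the trial's Dirichlet parameters.
        x = np.log(alpha)[None, :]
        loglik = np.array([self.gmms[k].score_samples(x)[0]
                           for k in sorted(self.gmms)])
        post = np.log(prior_probs + 1e-12) + loglik
        post -= post.max()
        post = np.exp(post)
        return post / post.sum()
```

In a deployment loop one would accumulate (alpha, pseudo-label) pairs as test trials arrive, refit the mixtures periodically, and apply `calibrate` to each new trial; the update schedule and the exact fusion rule are not specified by the abstract and are left as assumptions here.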
Primary Area: applications to neuroscience & cognitive science
Submission Number: 17264