Semantic-guided Contrastive Learning for EEG Multimodal Decoding of Listening and Watching

19 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Semantic decoding, contrastive learning, EEG, multimodal
TL;DR: We propose a unified contrastive learning framework for EEG-based semantic decoding that adapts to both auditory and visual modalities.
Abstract: Semantic decoding poses a fundamental challenge for brain-computer interfaces (BCIs) that aim at naturalistic brain-machine communication. Current approaches remain modality-specific (e.g., tailored to visual, auditory, or linguistic stimuli) and lack a universal framework that generalizes across diverse modalities. To address this, we propose Semantic-Guided Contrastive Learning (Semantic-CL), a unified framework for EEG-based semantic decoding that adapts to multiple modalities—including auditory and visual—without architectural modifications. The framework leverages semantic-guided soft contrastive learning to align EEG representations using stimulus semantic similarity metrics, augmented by inter-subject contrastive alignment to harmonize neural patterns across subjects. Evaluated on two modality-distinct benchmarks—SEED-DV (dynamic video) and Broderick (natural speech)—Semantic-CL achieves state-of-the-art performance in semantic decoding, especially in the more challenging cross-subject settings. This study establishes a modality-agnostic EEG semantic decoding framework, enabling deployable BCIs in naturalistic contexts. Our code is available at https://anonymous.4open.science/r/Semantic-CL-Cross_subject-anonymous-DBD7.
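The abstract describes "semantic-guided soft contrastive learning," i.e., replacing the usual one-hot contrastive targets with soft targets derived from the semantic similarity of the stimuli. A minimal sketch of that idea follows; the function name, NumPy implementation, and temperature value are illustrative assumptions, not the paper's actual code (which is linked above).

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def semantic_soft_contrastive_loss(eeg_emb, sem_emb, tau=0.1):
    """Soft contrastive loss: cross-entropy between semantic-similarity
    targets and EEG-to-stimulus similarity predictions (illustrative sketch).

    eeg_emb: (N, D) EEG representations for a batch of N stimuli.
    sem_emb: (N, D) semantic embeddings of the same stimuli.
    """
    # L2-normalize both sets of embeddings.
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    sem = sem_emb / np.linalg.norm(sem_emb, axis=1, keepdims=True)
    # Soft targets: each stimulus's semantic similarity to every stimulus
    # in the batch, turned into a distribution (instead of a one-hot label).
    targets = np.exp((sem @ sem.T) / tau)
    targets /= targets.sum(axis=1, keepdims=True)
    # Predictions: EEG embedding similarity to all stimulus embeddings.
    log_probs = log_softmax((eeg @ sem.T) / tau, axis=1)
    # Cross-entropy between the two distributions, averaged over the batch.
    return float(-(targets * log_probs).sum(axis=1).mean())
```

When the EEG embeddings line up with the semantic embeddings of their own stimuli, the predicted distribution matches the soft targets and the loss is small; misaligned embeddings are penalized in proportion to how semantically dissimilar the confused stimuli are.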
Primary Area: applications to neuroscience & cognitive science
Submission Number: 16285