CogMoE: Signal-Quality–Guided Multimodal MoE for Cognitive Load Prediction

Published: 26 Jan 2026, Last Modified: 11 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Cognitive-load, multi-modality, mixture-of-experts
TL;DR: We propose an adaptive mixture-of-experts framework for cognitive load prediction on multi-modal physiological data.
Abstract: Reliable cognitive load (CL) prediction in real-world settings is fundamentally constrained by the poor and variable quality of physiological signals. In safety-critical tasks such as driving, degraded signal quality can severely compromise prediction accuracy, limiting the deployment of existing models outside controlled lab conditions. To address this challenge, we propose CogMoE, a signal quality–guided Mixture-of-Experts (MoE) framework that dynamically adapts to heterogeneous and noisy inputs. CogMoE flexibly integrates physiological modalities, including EEG, ECG, EDA, and gaze, through quality-aware gating, enabling context-sensitive fusion. The framework operates in two stages: (1) quality-aware multi-modal synchronization and recovery to mitigate artifacts, temporal misalignment, and missing data, and (2) signal-quality-specific expert modeling via a cross-modal MoE transformer that regulates information flow based on signal reliability. To further improve stability, we introduce CORTEX Loss, which balances reconstruction fidelity and expert utilization under noise. Experiments on CL-Drive and ADABase show that CogMoE outperforms strong baselines, delivering consistent improvements across diverse signal qualities.
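The abstract describes quality-aware gating that down-weights unreliable modalities before fusing expert outputs. The sketch below is one plausible instantiation of that idea, not the paper's actual method: all function names, shapes, and the specific log-quality bias on the gate logits are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def quality_gated_fusion(expert_outputs, gate_logits, quality, eps=1e-6):
    """Hypothetical quality-aware MoE fusion.

    expert_outputs: (M, D) array, one embedding per modality expert
                    (e.g. EEG, ECG, EDA, gaze).
    gate_logits:    (M,) learned routing scores.
    quality:        (M,) per-modality signal-quality estimates in [0, 1].

    Biasing the logits by log(quality) drives the routing weight of a
    zero-quality modality toward zero while keeping weights normalized.
    """
    biased = gate_logits + np.log(np.clip(quality, eps, 1.0))
    weights = softmax(biased)
    fused = weights @ expert_outputs  # (D,) quality-weighted combination
    return fused, weights

# Toy usage: 4 modalities, 3-dim embeddings; gaze (last) has quality 0.
outputs = np.arange(12, dtype=float).reshape(4, 3)
fused, w = quality_gated_fusion(outputs,
                                gate_logits=np.zeros(4),
                                quality=np.array([1.0, 0.8, 0.5, 0.0]))
```

Under this sketch, the zero-quality modality receives a routing weight near zero, so degraded channels contribute almost nothing to the fused representation while the remaining weights still sum to one.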
Primary Area: applications to neuroscience & cognitive science
Submission Number: 13512