SleepMG: Multimodal Generalizable Sleep Staging with Inter-modal Balance of Classification and Domain Discrimination

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Oral · CC BY 4.0
Abstract: Sleep staging is crucial for sleep tracking and health assessment. Polysomnography (PSG), which contains multiple modalities such as electroencephalography, electrooculography, electromyography, and electrocardiography, is the fundamental means of sleep staging. However, because the modalities in PSG differ in both classification and domain discrimination performance, existing domain generalization methods face a dilemma of modal imbalance. To balance inter-modal differences and achieve highly accurate cross-domain sleep staging, we propose SleepMG, a Multimodal Generalizable Sleep staging method. SleepMG assesses the classification and domain discrimination performance of each modality and defines modal performance metrics as the variance between each modality's performance score and the average performance across modalities. Guided by these metrics, the gradients of the classifier and domain discriminator are adaptively adjusted, placing greater emphasis on poorly balanced modalities and less on well-balanced ones. Experimental results on public sleep staging datasets demonstrate that SleepMG outperforms state-of-the-art sleep staging methods and effectively balances multiple modalities, as evidenced by visualizations of the modal imbalance degree. Our code will be released after formal publication.
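A minimal sketch of the balancing idea described in the abstract is given below, assuming PyTorch and hypothetical choices for the per-modality performance scores (e.g., the mean probability assigned to the correct class or domain), a squared deviation from the average as the modal performance metric, and a softmax over these deviations to obtain gradient-scaling coefficients; none of these specifics are confirmed by the paper.

import torch

def modal_imbalance_coeffs(cls_scores, dom_scores, tau=1.0):
    # cls_scores / dom_scores: one performance score per modality (assumed here to be,
    # e.g., the mean probability assigned to the correct class / correct domain).
    balance = cls_scores - dom_scores                 # per-modality classification-vs-discrimination balance (assumption)
    deviation = (balance - balance.mean()) ** 2       # variance-style deviation from the average across modalities
    # Larger deviation -> larger coefficient, so poorly balanced modalities receive
    # more emphasis when scaling classifier and domain-discriminator gradients.
    coeffs = torch.softmax(deviation / tau, dim=0) * deviation.numel()
    return coeffs

In training, one could scale each modality branch's classification and domain-discrimination losses (and hence their gradients) by coeffs.detach(); how SleepMG applies the adjustment in practice is detailed in the full paper.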
Primary Subject Area: [Engagement] Emotional and Social Signals
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: Sleep staging plays a crucial role in sleep tracking and health assessment. It relies on polysomnography (PSG) signals, which encompass diverse multimodal physiological information such as electroencephalography (EEG), electrocardiography (ECG), and electromyography (EMG). The classification of sleep stages relies on the collective representation of multiple modalities of physiological signals. To improve multimodal sleep staging, we integrate a multimodal learning framework with domain-adversarial training. Furthermore, we explicitly assess and balance the classification and domain discrimination capabilities of the different modalities to address inter-modality differences. Experiments on multimodal sleep datasets show that our method outperforms state-of-the-art sleep staging methods while effectively balancing multiple modalities.
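The domain-adversarial component mentioned above is commonly implemented with a gradient-reversal layer, as in DANN; the sketch below shows only that standard construction as an assumption about how a domain discriminator could be attached, not the paper's exact architecture.

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the incoming gradient by -lambd in the
    # backward pass, pushing the shared encoder toward domain-invariant features.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

Illustrative usage: features from each modality encoder go directly to the sleep-stage classifier and through grad_reverse(...) before the domain discriminator.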
Supplementary Material: zip
Submission Number: 631