A Multimodal Consistency-Based Self-Supervised Contrastive Learning Framework for Automated Sleep Staging in Patients With Disorders of Consciousness
Abstract: Sleep is a fundamental human activity, and automated sleep staging holds considerable research potential. Although numerous deep learning methods proposed for sleep staging achieve notable performance, several challenges remain unresolved: inadequate representation and generalization capabilities, limitations in multimodal feature extraction, the scarcity of labeled data, and restricted practical applicability to patients with disorders of consciousness (DOC). This paper proposes MultiConsSleepNet, a multimodal consistency-based sleep staging network. The network comprises a unimodal feature extractor and a multimodal consistency feature extractor, which learn universal representations of electroencephalograms (EEGs) and electrooculograms (EOGs) and capture the consistency of intra- and intermodal features. In addition, self-supervised contrastive learning strategies are designed for unimodal and multimodal consistency learning to address a situation common in clinical practice, where high-quality labeled data are difficult to obtain but unlabeled data are abundant. These strategies effectively alleviate the model's dependence on labeled data and improve its generalizability, enabling effective transfer to DOC patients. Experimental results on three publicly available datasets demonstrate that MultiConsSleepNet achieves state-of-the-art sleep staging performance with limited labeled data and effectively exploits unlabeled data, enhancing its practical applicability. Furthermore, the proposed model yields promising results on a self-collected DOC dataset, offering a novel perspective for sleep staging research in patients with DOC.
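To make the cross-modal consistency objective concrete, the sketch below shows one way such a self-supervised contrastive setup could look in PyTorch: EEG and EOG embeddings of the same 30-s epoch are treated as a positive pair, and all other pairings in the batch as negatives. The encoder architecture, the InfoNCE-style loss, the temperature, and the input sizes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy 1-D CNN standing in for a unimodal feature extractor (assumed architecture)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, stride=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        # Unit-normalize so dot products below are cosine similarities.
        return F.normalize(self.net(x), dim=1)

def cross_modal_infonce(z_eeg, z_eog, temperature=0.1):
    """InfoNCE-style loss: matched EEG/EOG embeddings from the same sleep epoch
    are positives; every other pairing in the batch is a negative (symmetrized)."""
    logits = z_eeg @ z_eog.t() / temperature                # (batch, batch) similarities
    targets = torch.arange(z_eeg.size(0), device=z_eeg.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage on a dummy unlabeled batch: 30-s epochs at an assumed 100 Hz sampling rate.
eeg_enc, eog_enc = Encoder(), Encoder()
eeg = torch.randn(16, 1, 3000)  # (batch, channel, samples)
eog = torch.randn(16, 1, 3000)
loss = cross_modal_infonce(eeg_enc(eeg), eog_enc(eog))
loss.backward()
```

Because the loss needs only paired EEG/EOG segments and no stage labels, it can be trained on large unlabeled recordings before fine-tuning on the limited labeled data, which is the motivation the abstract gives for the self-supervised design.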