Keywords: Sleep Staging, Domain Adaptation, Domain-Invariant Feature Learning
Abstract: The generalization of deep learning models for sleep staging across different datasets is severely hindered by domain shift, a critical obstacle to their clinical adoption. We introduce MMUDA, a novel framework that tackles the complex, real-world challenge of Multi-source Multi-channel Unsupervised Domain Adaptation. Our approach learns domain-invariant features from multiple labeled source domains and an unlabeled target domain through a carefully designed architecture. We employ dedicated encoders with channel attention to capture rich temporal context and enhance inter-channel feature fusion. To bridge the domain gap, MMUDA uniquely combines two complementary alignment strategies: Maximum Mean Discrepancy (MMD) explicitly minimizes the distribution discrepancy between domain pairs, while cross-domain contrastive learning (CL) ensures that the aligned features remain class-discriminative. This dual-alignment process is stabilized by a variational autoencoder (VAE) that encourages a more compact latent feature space. Comprehensive evaluations on several public sleep datasets show that MMUDA sets a new state of the art in cross-domain sleep staging without requiring any target domain labels. Furthermore, we confirm its robustness and practical utility on locally collected hospital data. Our code will be released upon acceptance.
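To make the dual-alignment objective concrete, below is a minimal PyTorch sketch of the two losses named in the abstract: a multi-kernel MMD term between feature batches from two domains, and a supervised cross-domain contrastive term over labeled source features. Function names, kernel bandwidths, and the temperature are illustrative assumptions, not the authors' implementation; in particular, how the unlabeled target enters the contrastive term (e.g., via pseudo-labels) is not specified by the abstract.

```python
import torch
import torch.nn.functional as F


def mmd_loss(x, y, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel (RBF) MMD^2 between feature batches x (N, D) and y (M, D)."""
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2  # pairwise squared Euclidean distances
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    # Biased estimator: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()


def cross_domain_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss pulling same-class features together across domains.

    feats: (N, D) features pooled from all (source) domains; labels: (N,) sleep stages.
    """
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / temperature                         # cosine-similarity logits
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                                # positives exclude the anchor itself
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp = torch.exp(logits) * (1 - torch.eye(len(z), device=z.device))
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True) + 1e-12)
    pos_count = mask.sum(dim=1).clamp(min=1)
    return -(mask * log_prob).sum(dim=1).div(pos_count).mean()
```

In a training step, one would compute `mmd_loss` over every pair of domain feature batches (including the unlabeled target) and `cross_domain_contrastive_loss` over the concatenated labeled features, then add both, together with the classification and VAE terms, to the total objective with suitable weights.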
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 7089