ContrastSense: Domain-invariant Contrastive Learning for In-the-Wild Wearable Sensing

Gaole Dai, Huatao Xu, Hyungjun Yoon, Mo Li, Rui Tan, Sung-Ju Lee

Published: 21 Nov 2024 · Last Modified: 04 Jan 2026
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
License: CC BY-SA 4.0
Abstract: Existing wearable sensing models often struggle with domain shifts and class-label scarcity. Contrastive learning is a promising technique for addressing label scarcity, but it tends to capture domain-related features and suffers from low-quality negatives. To address both problems, we propose ContrastSense, a domain-invariant contrastive learning scheme for a realistic wearable sensing scenario in which domain shifts and class-label scarcity arise simultaneously. To capture domain-invariant information, ContrastSense exploits unlabeled data together with domain labels (e.g., user IDs or devices) to minimize the discrepancy across domains. To improve the quality of negatives, it leverages time and domain labels to select samples and refine the negative set. In addition, ContrastSense applies a parameter-wise penalty that preserves domain-invariant knowledge during fine-tuning, further improving model robustness. Extensive experiments show that ContrastSense outperforms state-of-the-art baselines by 8.9% on human activity recognition with inertial measurement units and by 5.6% on gesture recognition with electromyography under domain shifts across users. Moreover, under other kinds of domain shifts, across devices, on-body positions, and datasets, ContrastSense achieves consistent improvements over the best baselines.
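For intuition, the sketch below shows one plausible instantiation of the two ideas summarized in the abstract: an InfoNCE-style contrastive loss whose negative set is refined using domain labels, and a parameter-wise penalty that anchors fine-tuned weights to their pretrained values. The function names (`domain_aware_info_nce`, `parameter_wise_penalty`), the choice to mask same-domain negatives, and the EWC/L2-SP-style quadratic penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def domain_aware_info_nce(z1, z2, domain_ids, temperature=0.1):
    """InfoNCE over two augmented views of each sample. Candidate negatives
    that share the anchor's domain label (e.g., same user ID) are masked out,
    so the loss cannot be minimized by merely telling domains apart.
    Illustrative reading of 'refining negatives with domain labels' only."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine sims
    d = torch.cat([domain_ids, domain_ids], dim=0)

    # View i of sample k pairs with the other view of the same sample k.
    idx = torch.arange(2 * n, device=z.device)
    pos_idx = torch.cat([idx[n:], idx[:n]])

    # Mask self-similarity and same-domain candidates, but keep positives
    # (two views of one sample necessarily share a domain).
    mask = d.unsqueeze(0) == d.unsqueeze(1)
    mask.fill_diagonal_(True)
    mask[idx, pos_idx] = False
    sim = sim.masked_fill(mask, float('-inf'))

    return F.cross_entropy(sim, pos_idx)

def parameter_wise_penalty(model, anchor_params, importance, strength=1.0):
    """Quadratic penalty pulling each fine-tuned parameter toward its
    pretrained value, weighted per parameter by an importance score
    (an EWC/L2-SP-style stand-in for the paper's parameter-wise penalty)."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - anchor_params[name]) ** 2).sum()
    return strength * penalty

# Toy usage: embeddings for 8 windows drawn from 3 hypothetical users.
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
users = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2])
loss = domain_aware_info_nce(z1, z2, users)
```

Note that if every sample in a batch comes from a single domain, the mask leaves only the positive pair per anchor and the contrastive term degenerates, so batches would need to mix domains; how ContrastSense handles this is not specified in the abstract.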
DOI: 10.1145/3699744