Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Published: 01 Jan 2024, Last Modified: 14 Nov 2024 · IEEE Internet Things J. 2024 · CC BY-SA 4.0
Abstract: In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources required to acquire abundant annotated data and the inconsistency of activity categories across data sets. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (OPPORTUNITY and PAMAP2 as pretraining data sets, with UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
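The abstract describes two kinds of contrastive constraints: intramodality consistency (augmented views of the same modality should agree) and intermodality consistency (different modalities recorded for the same activity window should agree). The sketch below illustrates one plausible way to combine such constraints with a standard NT-Xent (InfoNCE) loss; it is an assumed illustration, not the authors' implementation, and the encoder, augmentation, and loss weighting are placeholders.

```python
# Minimal sketch of modality-consistency contrastive pretraining (assumed design,
# not the authors' released code). Each sensor window yields an accelerometer and
# a gyroscope view; intramodality positives are two augmentations of the same
# modality, intermodality positives are embeddings of different modalities from
# the same window. Both are optimized with an NT-Xent (InfoNCE) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss: matching rows of z1 and z2 are positives, all others negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

class ModalityEncoder(nn.Module):
    """1-D CNN encoder for one 3-axis modality; placeholder architecture."""
    def __init__(self, in_channels=3, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):                          # x: (batch, 3, time)
        return self.net(x)

def jitter(x, sigma=0.05):
    """Simple augmentation: additive Gaussian noise (stand-in for richer transforms)."""
    return x + sigma * torch.randn_like(x)

# Toy batch: 32 windows of 100 samples from accelerometer and gyroscope.
acc = torch.randn(32, 3, 100)
gyr = torch.randn(32, 3, 100)
enc_acc, enc_gyr = ModalityEncoder(), ModalityEncoder()

# Intramodality consistency: two augmented views of the same modality agree.
loss_intra = nt_xent(enc_acc(jitter(acc)), enc_acc(jitter(acc))) + \
             nt_xent(enc_gyr(jitter(gyr)), enc_gyr(jitter(gyr)))

# Intermodality consistency: different modalities of the same window agree.
loss_inter = nt_xent(enc_acc(acc), enc_gyr(gyr))

loss = loss_intra + loss_inter                     # joint self-supervised objective
loss.backward()
```

After this annotation-free pretraining stage, the pretrained encoders would typically be fine-tuned with a small labeled set for the target user or data set, which corresponds to the personalized domain adaptation step mentioned in the abstract.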