Abstract: Human Context Recognition (HCR) from smartphone sensor data is an essential task in Context-Aware (CA) systems, including those targeting healthcare and security. Two types of smartphone HCR studies (and datasets) have become popular for training HCR models: a) scripted and b) unscripted (in-the-wild). Supervised machine-learning HCR models can achieve good performance on scripted datasets because of their high-quality labels, but such models generalize poorly to in-the-wild datasets, which are more representative of real-world scenarios. In-the-wild datasets are often imbalanced, have missing or incorrect labels, and exhibit a diversity of phone placements and smartphone models. Lab-to-field approaches train HCR models to learn a robust data representation from a high-fidelity, scripted dataset, which is then used to improve performance on noisy in-the-wild datasets with similar labels, without incurring the high expense of gathering a high-quality labeled dataset. In this paper, leveraging coincident datasets with the same HCR labels collected in separate scripted and unscripted studies, we propose Triplet-based Domain Adaptation for Context REcognition (Triple-DARE), a novel lab-to-field neural network method with three key components: 1) a domain-alignment loss to learn domain-invariant embeddings, 2) a classification loss to maintain task-discriminative features, and 3) a joint fusion triplet loss designed to increase intra-class compactness and inter-class separation in the embedding space of multi-labeled datasets. In rigorous evaluations, Triple-DARE improved on the F1-score and classification accuracy of state-of-the-art HCR baselines by 6.3% and 4.5%, respectively, and on HCR models with no adaptation by 44.6% and 10.7%, respectively.
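The abstract names three loss components combined into one objective. As a rough illustration of how such a combination is typically formed, the NumPy sketch below sums a classification loss, a mean-difference domain-alignment penalty (a simple stand-in for a domain-alignment loss), and a standard margin-based triplet loss. All function names, the alignment choice, and the weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge on the gap between anchor-positive and anchor-negative distances:
    # pulls same-class embeddings together, pushes different classes apart.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

def domain_alignment_loss(source_emb, target_emb):
    # Squared distance between the mean embeddings of the two domains
    # (a linear-kernel MMD; a simple stand-in for a domain-alignment loss).
    return float(np.sum((source_emb.mean(axis=0) - target_emb.mean(axis=0)) ** 2))

def cross_entropy(probs, label):
    # Classification loss keeping embeddings task-discriminative.
    return -float(np.log(probs[label] + 1e-12))

def combined_objective(cls_probs, label, src_emb, tgt_emb,
                       anchor, positive, negative,
                       w_cls=1.0, w_dom=0.5, w_tri=0.5):
    # Weighted sum of the three components described in the abstract;
    # the weights are illustrative, not values from the paper.
    return (w_cls * cross_entropy(cls_probs, label)
            + w_dom * domain_alignment_loss(src_emb, tgt_emb)
            + w_tri * triplet_loss(anchor, positive, negative))
```

In practice each term would be computed on mini-batches of embeddings from the scripted (source) and in-the-wild (target) datasets and minimized jointly by the network.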