DuLPA: Dual-Level Prototype Alignment for Unsupervised Domain Adaptation in Activity Recognition from Wearables
Keywords: Human Activity Recognition, wearable sensors, unsupervised domain adaptation, label shift
TL;DR: Unsupervised domain adaptation for cross-user human activity recognition from wearable-based time-series data
Abstract: In wearable human activity recognition (WHAR), models often falter on unseen users due to behavioral and sensor differences. Without target labels, unsupervised domain adaptation (UDA) can help improve cross-user generalization. However, many WHAR UDA methods either pool all source users together or perform one-to-one source–target alignment, ignoring individual differences and risking negative transfer. To address this limitation, we propose \textbf{\textit{DuLPA}}, a \underline{\textbf{Du}}al-\underline{\textbf{L}}evel \underline{\textbf{P}}rototype \underline{\textbf{A}}lignment method for unsupervised cross-user domain adaptation. First, it aligns class prototypes between each source user and the target to capture individual variation, with a convex reweighting scheme to further handle class imbalance. Second, a fusion step based on best linear unbiased prediction (BLUP) forms robust global class prototypes by optimally weighting the domain-specific prototypes according to estimated within- and between-domain variances. On four public datasets, \textbf{\textit{DuLPA}} outperforms several baselines, improving macro-F1 by 5.34\%.
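To make the fusion step concrete, below is a minimal sketch of BLUP-style prototype fusion, assuming the global prototype of a class is an inverse-variance weighted (convex) combination of per-source-user prototypes. The function name, array shapes, and variance estimates are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_prototypes(domain_protos, within_var, between_var):
    """Fuse per-domain class prototypes into one global prototype.

    domain_protos: (D, F) array, one F-dimensional prototype per source domain.
    within_var:    (D,) estimated within-domain variance of each prototype.
    between_var:   scalar estimated between-domain variance.
    """
    # BLUP/shrinkage-style weights: prototypes with lower within-domain
    # variance (i.e., more reliable estimates) receive larger weights.
    weights = between_var / (between_var + within_var)   # (D,)
    weights = weights / weights.sum()                     # normalize to a convex combination
    return weights @ domain_protos                        # (F,) global class prototype

# Toy usage: 3 source users, 8-dimensional feature prototypes for one class.
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))
w_var = np.array([0.2, 0.5, 1.0])   # noisier prototypes get down-weighted
b_var = 0.8
global_proto = fuse_prototypes(protos, w_var, b_var)
print(global_proto.shape)  # (8,)
```

The key design point illustrated here is that the global prototype is not a naive average: domains whose class estimates are noisy contribute less, which is what makes the fused prototypes robust across heterogeneous users.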
Submission Number: 100