Boosting the Performance of Lightweight HAR Models with Attention and Knowledge Distillation

Published: 01 Jan 2024, Last Modified: 06 Mar 2025 · IE 2024 · CC BY-SA 4.0
Abstract: Although deep learning models have significantly improved sensor-based human activity recognition (HAR), deploying them on resource-limited mobile and wearable devices remains challenging. Reducing model size is one alternative, but it often degrades performance. This study integrates attention mechanisms and knowledge distillation to enhance the recognition performance of lightweight sensor-based HAR models. Using three activity recognition datasets (Opportunity, WISDM, and Sensors), initial experiments examine the individual effects of response-based knowledge distillation and attention mechanisms on the lightweight model's performance. The results show that attention surpasses distillation in recognition accuracy. To improve performance further, we introduce two combined approaches: response- and attention-based distillation, and response-based distillation with attention applied directly to the lightweight model. Both approaches outperform attention alone, and the latter additionally eliminates the need for a pre-trained attention-based teacher by incorporating attention directly into the student. To assess resource consumption, the performance-boosted lightweight model is compared with a moderate-size model that requires approximately 3-4 times more FLOPs and parameters. The improved lightweight model outperforms the moderate-size model on all datasets in terms of recognition performance.
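The abstract's second combined approach pairs response-based distillation with attention placed directly in the student. A minimal sketch of that idea is shown below, assuming a PyTorch setup; the squeeze-and-excitation style attention block, the temperature `T`, and the weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: response-based knowledge distillation with a channel-attention
# block inside the lightweight student (assumed PyTorch; hyperparameters are
# illustrative, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over sensor channels
    (one plausible attention variant for a lightweight HAR student)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=-1))   # squeeze over the time axis
        return x * weights.unsqueeze(-1)    # re-weight channels

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Response-based KD: KL divergence between temperature-softened teacher
    and student outputs, mixed with cross-entropy on the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In this sketch the teacher only supplies softened logits at training time, so no attention-based teacher is needed; the attention block lives entirely in the student, matching the paper's motivation for the second combined approach.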