AccEmo: Accelerometer Based Human Emotion Recognition for Eyewear Devices

Published: 2025 · Last Modified: 26 Jan 2026 · IEEE Trans. Mob. Comput. 2025 · CC BY-SA 4.0
Abstract: With the increasing popularity of virtual reality applications, there is growing demand for more interactive entertainment, learning, social interaction, and other activities on eyewear devices. Recognizing users' emotions and providing reliable feedback can significantly improve the immersive experience. However, previous work on emotion recognition either required modifying existing eyewear devices to integrate additional sensors, or relied on specialized sensors available only in expensive commercial-grade eyewear, making direct deployment on existing consumer-grade eyewear devices challenging. In this paper, we propose AccEmo, the first system that analyzes data from the built-in accelerometer of eyewear devices to accurately recognize human emotion. AccEmo first employs signal processing techniques on the raw accelerometer data, and then uses a binary classification network to determine whether the data is influenced by emotional changes. Subsequently, AccEmo applies a network architecture based on a residual neural network with a channel-wise attention mechanism as a universal feature extractor to capture complex emotion-related features from the accelerometer data. Finally, AccEmo uses personalized classifiers to recognize emotions for individual users. Extensive performance evaluation of AccEmo across diverse users demonstrates an average accuracy of 94.3%. Additionally, the robustness of AccEmo is validated through evaluations in various scenarios, yielding promising results.
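To illustrate the kind of feature extractor the abstract describes, below is a minimal NumPy sketch of a residual block with channel-wise attention in the squeeze-and-excitation style. It is an assumption for illustration only: the paper does not publish this exact architecture, and all function names, shapes, and the reduction ratio `r` here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def se_residual_block(x, w1, w2):
    """Hypothetical residual block with channel-wise (SE-style) attention.

    x  : (channels, length) feature map from an accelerometer branch
    w1 : (channels // r, channels) squeeze weights
    w2 : (channels, channels // r) excitation weights
    """
    # Squeeze: global average pooling over the temporal axis -> (channels,)
    s = x.mean(axis=1)
    # Excitation: bottleneck MLP producing per-channel gates in (0, 1)
    gates = sigmoid(w2 @ relu(w1 @ s))
    # Re-weight each channel by its gate, then add the identity shortcut
    return x + x * gates[:, None]

rng = np.random.default_rng(0)
C, L, r = 8, 32, 4
x = rng.standard_normal((C, L))
out = se_residual_block(x,
                        rng.standard_normal((C // r, C)),
                        rng.standard_normal((C, C // r)))
print(out.shape)  # (8, 32)
```

The identity shortcut preserves the raw signal path while the learned gates let the network emphasize accelerometer channels whose motion patterns correlate with emotional state.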