Abstract: Facial Expression Recognition (FER) and Human Activity Recognition (HAR) are key areas in wearable computing, with applications in healthcare, fitness, and human-computer interaction. This study explores the use of Emteq's OCOsense smart glasses, which incorporate optomyographic sensors and a 9-axis inertial measurement unit (IMU), for both FER and HAR. Using deep learning (DL) models, including CNNs, ConvLSTM, and STResNet variants, the study compares performance on FER with six expression classes and on HAR with seven activities. We investigate the impact of the ConvBoost framework on model performance, extending its use beyond HAR with IMU data and introducing end-to-end learning for FER using OCOsense smart glasses. Our findings confirm that ConvBoost effectively improves the performance of complex, overfitting-prone models. However, the results also reveal that applying ConvBoost to simpler models, such as the 1-dimensional convolutional neural network (CNN 1D), decreases the F1-score. The best-performing models, Attention ConvLSTM for FER and ConvLSTM for HAR (both in combination with ConvBoost), achieved macro F1-scores of 0.78 and 0.88, respectively. These results highlight the potential of smart glasses for FER and HAR, establishing benchmark results for future advancements in glasses-based wearable computing.