Domain Adaptation for Classifying Spontaneous Smile Videos

Published: 01 Jan 2024 · Last Modified: 01 Mar 2025 · DICTA 2024 · CC BY-SA 4.0
Abstract: Distinguishing between spontaneous and posed smiles has become an exciting topic due to its potential applications in several sectors. However, it is a very challenging task, even for humans. Past researchers have proposed several semi- and fully automatic approaches to smile classification, exploring both feature-engineering and end-to-end deep neural network strategies. One major issue with past methods is the degradation of performance when the model is deployed in a data domain different from the training domain, as smile patterns differ across diverse groups (e.g., young, adult, male, and female). In this paper, we present an end-to-end domain adaptation model to address this problem. We explore a new unsupervised domain adaptation application for smile veracity recognition. We propose an identity-invariant learning objective to align knowledge from the training (source) data with the testing (target) data. Our approach penalizes identity information hidden in the feature space by enforcing sufficient distinctiveness among features of different smile phases while maintaining inter-class cohesion. We use the UVA-NEMO, MMI, SPOS, and BBC datasets to validate our model and find that our domain adaptation approach outperforms existing models, achieving state-of-the-art performance.
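The abstract only describes the objective conceptually, so the following is a minimal PyTorch sketch of how an identity-invariant term could be combined with a standard unsupervised domain adaptation setup: supervised classification on labeled source videos, a contrastive-style term that pulls together same-phase smile features across subjects, and a simple alignment term on unlabeled target features. All names (`SmileDAModel`, `identity_invariant_loss`, the MMD-style alignment, the loss weights) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmileDAModel(nn.Module):
    """Hypothetical encoder + classifier for smile veracity (spontaneous vs. posed)."""
    def __init__(self, in_dim=512, feat_dim=128):
        super().__init__()
        # Placeholder feature encoder; the paper's actual video backbone is not specified here.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        self.classifier = nn.Linear(feat_dim, 2)  # spontaneous / posed

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

def identity_invariant_loss(feats, phase_labels, temperature=0.1):
    """Contrastive-style surrogate: pull together features of the same smile phase
    (e.g., onset/apex/offset) across different subjects and push apart different
    phases, so identity cues carry less weight in the feature space."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                       # pairwise similarities
    same_phase = phase_labels.unsqueeze(0) == phase_labels.unsqueeze(1)
    off_diag = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    log_prob = F.log_softmax(sim.masked_fill(~off_diag, float("-inf")), dim=1)
    pos = (same_phase & off_diag).float()
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def alignment_loss(source_feats, target_feats):
    """Simple mean-matching (linear-kernel MMD) as a stand-in domain-alignment term."""
    return (source_feats.mean(0) - target_feats.mean(0)).pow(2).sum()

def train_step(model, opt, src_x, src_y, src_phase, tgt_x, lam_id=0.5, lam_da=1.0):
    """One unsupervised-DA step: supervised loss on source, alignment on unlabeled target."""
    z_s, logits_s = model(src_x)
    z_t, _ = model(tgt_x)                                       # target labels are never used
    loss = (F.cross_entropy(logits_s, src_y)
            + lam_id * identity_invariant_loss(z_s, src_phase)
            + lam_da * alignment_loss(z_s, z_t))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The split into a phase-level contrastive term plus a domain-alignment term is one plausible reading of the stated objective; the actual loss formulation, backbone, and hyperparameters are given in the paper itself.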