Continual Facial Features Transfer for Facial Expression Recognition

Rahul Singh Maharjan, Lorenzo Bonicelli, Marta Romeo, Simone Calderara, Angelo Cangelosi, Rita Cucchiara

Published: 01 Jul 2025, Last Modified: 06 Nov 2025. IEEE Transactions on Affective Computing. CC BY-SA 4.0
Abstract: Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach, which assumes that a model trained on an in-the-wild facial expression dataset with one domain distribution will perform well on a test dataset under a domain distribution shift. However, facial images in the real world may come from domain distributions different from the one the model was trained on, and re-training the model on new domain distributions alone severely degrades performance on the previous domains. Re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we oppose the train-once-test-all approach and propose a buffer-based continual learning approach to enhance performance across multiple in-the-wild datasets. We propose a model that continually leverages attention to important facial features from the pre-trained model to improve performance across datasets. We validated our model using split in-the-wild datasets, where each dataset is provided to the model in an incremental setting instead of all at once. Furthermore, to evaluate model performance, we continually used three in-the-wild datasets representing different domains (Domain-FER). Extensive experiments on these datasets reveal that the proposed model achieves better results than other continual FER models.
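The buffer-based incremental setting described above can be sketched with a rehearsal loop: new batches are mixed with examples replayed from a fixed-size memory, which mitigates forgetting of earlier domains. The sketch below uses reservoir sampling, a common buffer-filling strategy in rehearsal methods; all names (`ReplayBuffer`, `continual_step`, `train_fn`) are illustrative and not the paper's actual implementation.

```python
import random


class ReplayBuffer:
    """Fixed-size memory filled via reservoir sampling.

    Each example ever seen ends up stored with equal probability,
    so the buffer approximates a uniform sample over all past domains.
    (Illustrative sketch, not the authors' code.)
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (image, label) examples
        self.seen = 0    # total examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        # Draw up to k stored examples for rehearsal.
        return random.sample(self.data, min(k, len(self.data)))


def continual_step(batch, buffer, train_fn, replay_size=8):
    """One incremental update: train on the current batch mixed with
    replayed examples from past domains, then store the batch."""
    replay = buffer.sample(replay_size)
    train_fn(batch + replay)  # stand-in for the real gradient update
    for example in batch:
        buffer.add(example)
```

Calling `continual_step` once per incoming batch keeps the model exposed to earlier domain distributions without re-training on the full history.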