FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs

Published: 20 Jul 2024, Last Modified: 21 Jul 2024, MM 2024 Poster, CC BY 4.0
Abstract: Dynamic Facial Expression Recognition (DFER) is crucial for understanding human behavior. However, current methods exhibit limited performance, mainly due to the scarcity of high-quality data, the insufficient utilization of facial dynamics, and the ambiguity of expression semantics. To this end, we propose a novel framework, named Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs (FineCLIPER), which incorporates the following designs: 1) To better distinguish between similar facial expressions, we extend the class labels into textual descriptions from both positive and negative aspects and obtain supervision by computing cross-modal similarity with the CLIP model; 2) FineCLIPER mines useful cues from DFE videos in a hierarchical manner. Specifically, besides directly embedding video frames as input (low semantic level), we extract face segmentation masks and landmarks from each frame (middle semantic level) and utilize a Multi-modal Large Language Model (MLLM) to generate detailed descriptions of facial changes across frames with designed prompts (high semantic level). We further adopt Parameter-Efficient Fine-Tuning (PEFT) to enable efficient adaptation of large pre-trained models (i.e., CLIP) to this task. FineCLIPER achieves SOTA performance on the DFEW, FERV39k, and MAFW datasets in both supervised and zero-shot settings with only a few tunable parameters. Analysis and ablation studies further validate its effectiveness.
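To make the two core ideas in the abstract concrete, the snippet below gives a minimal, hypothetical sketch (not the authors' implementation) of (1) a bottleneck adapter for parameter-efficient tuning and (2) supervision from cross-modal similarity against positive and negative class descriptions. The `Adapter` design, the `pos_neg_similarity_loss` form, the temperature `tau`, and the random stand-in embeddings are all illustrative assumptions; in practice the features would come from frozen CLIP visual and text encoders.

```python
# Illustrative sketch only: adapters + positive/negative description supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter inserted alongside a frozen backbone (PEFT-style)."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen features intact.
        return x + self.up(F.relu(self.down(x)))


def pos_neg_similarity_loss(video_emb, pos_text_emb, neg_text_emb, labels, tau=0.07):
    """Cross-entropy over similarities to positive-class descriptions, plus a
    term discouraging similarity to the ground-truth class's negative description."""
    v = F.normalize(video_emb, dim=-1)        # (B, D) video embeddings
    pos = F.normalize(pos_text_emb, dim=-1)   # (C, D) one positive description per class
    neg = F.normalize(neg_text_emb, dim=-1)   # (C, D) one negative description per class
    logits_pos = v @ pos.t() / tau            # (B, C)
    logits_neg = v @ neg.t() / tau            # (B, C)
    ce = F.cross_entropy(logits_pos, labels)
    # Penalize similarity to the negative description of the ground-truth class.
    neg_sim = logits_neg.gather(1, labels.unsqueeze(1)).squeeze(1)
    return ce + neg_sim.mean()


if __name__ == "__main__":
    B, C, D = 4, 7, 512                        # batch size, classes, embedding dim
    adapter = Adapter(D)
    video_emb = adapter(torch.randn(B, D))     # stand-in for frozen CLIP video features
    pos_text = torch.randn(C, D)               # stand-in for encoded positive descriptions
    neg_text = torch.randn(C, D)               # stand-in for encoded negative descriptions
    labels = torch.randint(0, C, (B,))
    loss = pos_neg_similarity_loss(video_emb, pos_text, neg_text, labels)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```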
Primary Subject Area: [Engagement] Emotional and Social Signals
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Our work focuses on in-the-wild Dynamic Facial Expression Recognition (DFER), i.e., recognizing facial expressions in natural settings to aid emotion and psychological analysis. Unlike traditional DFER methods that rely solely on video data, we address the challenges posed by limited and heterogeneous datasets by expanding the input modalities with face parsing masks, landmarks, and detailed facial action descriptions for more comprehensive modeling. To integrate this multi-modal data efficiently, we introduce FineCLIPER, a CLIP-based framework. To boost performance with diverse data, we combine Parameter-Efficient Fine-Tuning (PEFT) with a negative adapter strategy, ensuring both effectiveness and efficiency. We believe these multi-modal innovations for DFER hold promise for advancing multi-modal representation learning.
Supplementary Material: zip
Submission Number: 533