Bridging Visual Affective Gap: Borrowing Textual Knowledge by Learning from Noisy Image-Text Pairs

Published: 20 Jul 2024, Last Modified: 21 Jul 2024, MM 2024 Oral, CC BY 4.0
Abstract: Visual emotion recognition (VER) is a longstanding field that has garnered increasing attention with the advancement of deep neural networks. Although recent studies have achieved notable improvements by leveraging the knowledge embedded in pre-trained visual models, the lack of a direct association between factual-level features and emotional categories, called the ''affective gap'', limits the applicability of pre-training knowledge to VER tasks. In contrast, the explicit emotional expression and high information density of the textual modality eliminate the ''affective gap''. We therefore propose borrowing knowledge from pre-trained textual models to enhance the emotional perception of pre-trained visual models. We focus on the factual and emotional connections between images and texts in noisy social media data, and propose Partitioned Adaptive Contrastive Learning (PACL) to leverage these connections. Specifically, we separate different types of samples and devise a distinct contrastive learning strategy for each type. By dynamically constructing negative and positive pairs, we fully exploit the potential of noisy samples. Through comprehensive experiments, we demonstrate that bridging the ''affective gap'' significantly improves the performance of various pre-trained visual models on downstream emotion-related tasks.
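To make the contrastive setup in the abstract concrete, the sketch below shows a generic image-text contrastive (InfoNCE-style) loss with a per-pair weight standing in for the partition-dependent treatment of noisy samples. This is only an illustrative assumption about how partitioned, adaptive weighting could enter such a loss; the function, argument names, and weighting scheme are hypothetical and are not taken from the paper's actual PACL formulation.

```python
# Hypothetical sketch: weighted symmetric InfoNCE over image-text pairs.
# `partition_weight` is an assumed per-pair weight (not the paper's exact method)
# standing in for how differently partitioned noisy samples might be treated.
import torch
import torch.nn.functional as F

def partitioned_contrastive_loss(img_emb, txt_emb, partition_weight, tau=0.07):
    """img_emb, txt_emb: (B, D) embeddings from the visual and textual encoders.
    partition_weight: (B,) weight in [0, 1] reflecting how strongly a noisy
    image-text pair is factually/emotionally connected."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy over matched pairs, weighted per pair.
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
    return ((loss_i2t + loss_t2i) * 0.5 * partition_weight).mean()

if __name__ == "__main__":
    B, D = 8, 256
    img_emb, txt_emb = torch.randn(B, D), torch.randn(B, D)
    weights = torch.rand(B)   # e.g. produced by a sample-partitioning step
    print(partitioned_contrastive_loss(img_emb, txt_emb, weights).item())
```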
Primary Subject Area: [Engagement] Emotional and Social Signals
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: As a vital facet of human engagement with the world, perceiving emotions through visual cues is progressively emerging as a pivotal challenge on the path toward the next generation of artificial general intelligence, with broad applications across numerous domains. Due to the ''affective gap'' in the visual modality, visual models cannot fully utilize generalizable pre-trained knowledge. To bridge this gap, we leverage the advantages of language and transfer textual knowledge to visual models via Partitioned Adaptive Contrastive Learning. We validate its effectiveness through extensive experiments on various pre-trained visual models and emotional downstream tasks.
Supplementary Material: zip
Submission Number: 706