Abstract: News content combining images and text now spreads widely on social media, prompting significant interest in multi-modal fake news detection. However, existing research in this field relies on large-scale annotated data to train models, while data scarcity characterizes the initial stages of fake news propagation. Addressing the challenge of few-shot multi-modal fake news detection therefore becomes essential. Under limited data availability, current methods inadequately exploit the information inherent in each modality, leaving modal information underutilized. To address these challenges, in this paper we propose a novel detection approach called Prompt-based Adaptive Fusion (ProAF). Specifically, to enhance the model's comprehension of news content, we extract supplementary information from both modalities to provide timely guidance for model training. The model then adaptively fuses the output predictions of different prompts during training, effectively improving its robustness. Experimental results on two datasets show that our model surpasses existing methods, representing a significant advance in few-shot multi-modal fake news detection.