Abstract: Fake news detection has attracted increasing research attention. Most multimodal fake news detection models focus only on the semantic correlations across modalities and often ignore the semantic differences between them, which limits detection performance. To address this problem, this paper proposes a multimodal fake news detection model (AFUG) that fully exploits the semantic correlations among modalities through a cross-modal fusion module. A self-supervised unimodal label generation module is also added to constrain the optimization of the overall model. To focus on samples whose modalities carry highly divergent information, we design an adaptive weight adjustment strategy that guides the model's learning of unimodal information. Extensive experiments on two datasets demonstrate the effectiveness of AFUG.