Leveraging Multimodal Fusion for Advanced Fake News Detection

ACL ARR 2024 June Submission 4543 Authors

16 Jun 2024 (modified: 25 Jul 2024) · License: CC BY 4.0
Abstract: Detecting multimodal fake news is imperative for maintaining social media security and safeguarding community well-being. Existing detection approaches often fail to adequately model the nuanced context of social media and underutilize available modalities such as metadata, leaving a significant gap. In this paper, we propose a novel and efficient model that integrates global and local textual features: it captures semantic relationships within the text and uses a global corpus representation to align with the complex context of social media. We further strengthen cross-modal connectivity through a multilevel fusion technique that incorporates visual and metadata information. Extensive experiments on Fakeddit, the largest multimodal fake news dataset, show that our method achieves state-of-the-art performance across all classification tasks, underscoring its effectiveness.
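The abstract describes a multilevel fusion design: global and local textual features are combined first, then fused with visual and metadata representations for classification. Since the paper's implementation is not shown on this page, the following is a minimal PyTorch sketch of that general idea; all module names, feature dimensions, and the two-stage concatenation scheme are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a multilevel fusion classifier in PyTorch.
# All names and dimensions are hypothetical; the encoders producing the
# text/image/metadata features (e.g. a BERT- or ResNet-style backbone)
# are assumed to exist upstream.
import torch
import torch.nn as nn

class MultilevelFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, meta_dim=32,
                 hidden_dim=256, num_classes=2):
        super().__init__()
        # Level 1: combine global (corpus-level) and local (post-level)
        # text features, then project into a shared hidden space.
        self.text_proj = nn.Linear(text_dim * 2, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.meta_proj = nn.Linear(meta_dim, hidden_dim)
        # Level 2: fuse the text representation with vision and metadata.
        self.fuse = nn.Sequential(
            nn.Linear(hidden_dim * 3, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_global, text_local, image_feat, meta_feat):
        # Level 1: concatenate the two text views and project.
        text = self.text_proj(torch.cat([text_global, text_local], dim=-1))
        # Level 2: concatenate all projected modalities and classify.
        fused = torch.cat(
            [text, self.image_proj(image_feat), self.meta_proj(meta_feat)],
            dim=-1,
        )
        return self.fuse(fused)

# Usage with random tensors standing in for encoder outputs.
model = MultilevelFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 768),
               torch.randn(4, 512), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```

Early concatenation of the two text views before cross-modal fusion is one simple way to realize "multilevel" fusion; attention-based or gated fusion would be a plausible alternative at either level.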
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: rumor/misinformation detection, multimodal applications
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4543