Abstract: The spread of fake news on social media is a rapidly growing problem affecting both the general public and governments. Current methods for detecting fake news often fail to take full advantage of the available multi-modal information, which can lead to inconsistent decisions due to modality ambiguity. Moreover, existing methods often overlook the view-specific information unique to each modality, which could significantly boost their discriminative power and overall performance. To this end, we introduce a novel model, MFVIEW (Multi-Modal Fake News Detection with View-Specific Information Extraction), that unifies the modeling of multi-modal and view-specific information within a single framework. Specifically, the proposed model consists of a View-Specific Information Extractor, which incorporates an orthogonal constraint with respect to the shared subspace to exploit the discriminative information unique to each modality, and an Ambiguity Cross-Training Module, which detects inherent ambiguity across different modalities by capturing their correlation. Extensive experiments on two publicly available datasets show that MFVIEW outperforms state-of-the-art fake news detection approaches with an accuracy of 91.0% on the Twitter dataset and 93.3% on the Weibo dataset.
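To make the orthogonal-constraint idea from the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): each modality is projected into a shared subspace and a view-specific subspace, and a penalty discourages overlap between the two. The module name, feature dimensions, and the particular orthogonality penalty (squared correlation between the two projections) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewSpecificExtractor(nn.Module):
    """Illustrative sketch: project each modality into a shared and a
    view-specific subspace, penalizing overlap with an orthogonality loss."""

    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256):
        super().__init__()
        # Shared projections (one per modality) map into a common subspace.
        self.shared_text = nn.Linear(text_dim, hidden_dim)
        self.shared_image = nn.Linear(image_dim, hidden_dim)
        # View-specific projections keep information unique to each modality.
        self.specific_text = nn.Linear(text_dim, hidden_dim)
        self.specific_image = nn.Linear(image_dim, hidden_dim)

    @staticmethod
    def _ortho(a, b):
        # Squared correlation between two batches of features: one common way
        # to enforce (approximate) orthogonality between subspaces.
        a = F.normalize(a - a.mean(dim=0), dim=1)
        b = F.normalize(b - b.mean(dim=0), dim=1)
        return (a.t() @ b).pow(2).sum()

    def forward(self, text_feat, image_feat):
        s_t = self.shared_text(text_feat)
        s_i = self.shared_image(image_feat)
        v_t = self.specific_text(text_feat)
        v_i = self.specific_image(image_feat)
        # Orthogonal constraint between shared and view-specific features of
        # the same modality; added to the detection loss during training.
        ortho_loss = self._ortho(s_t, v_t) + self._ortho(s_i, v_i)
        return (s_t, s_i, v_t, v_i), ortho_loss
```

In a full model, the shared features would feed the cross-modal (ambiguity) component while the view-specific features preserve per-modality cues; the sketch only shows how such a constraint might be wired, under the stated assumptions.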