Abstract: The proliferation of fake news on social media, enabled by the growth of the Internet, has become a pressing social issue and makes detecting its diverse multimodal forms increasingly urgent. However, current methods cannot verify the validity of the extracted multimodal features, overlook the interaction between multimodal content, and fail to learn effective cross-modal features. In this paper, we investigate new multimodal learning methods for representation and fusion in fake news detection. We design a two-branch adversarial network to extract event-irrelevant features at different levels, followed by inter-modal information interaction and intra-modal information enhancement to enrich the features. To improve the interpretability of the model, we propose a multi-task learning methodology based on the variational autoencoder structure, which redesigns a general loss function to balance competing submodules and, in turn, verifies the effectiveness of the multimodal features. Finally, comparative experiments with different methods demonstrate that the proposed multimodal fake news detection model effectively improves fake news detection performance.
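To make the multi-task loss design concrete, the following is a minimal sketch in PyTorch of how an adversarially trained, event-irrelevant feature extractor might be combined with a variational-autoencoder objective under one weighted loss. The paper itself does not specify this code; the names `GradReverse` and `total_loss`, and the weights `alpha`, `beta`, and `gamma`, are illustrative assumptions, with a gradient reversal layer standing in for the adversarial branch.

```python
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass, negated
    (scaled) gradient on the backward pass, so the feature extractor is
    trained to confuse the event discriminator (event-irrelevance)."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


def total_loss(news_logits, news_labels,
               event_logits, event_labels,
               recon, target, mu, logvar,
               alpha=1.0, beta=0.1, gamma=0.05):
    """Weighted sum balancing the competing submodules.
    The weights are placeholders, not values from the paper."""
    det = F.cross_entropy(news_logits, news_labels)    # fake/real detection
    adv = F.cross_entropy(event_logits, event_labels)  # event discriminator
    rec = F.mse_loss(recon, target)                    # VAE reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    return det + alpha * adv + beta * rec + gamma * kld
```

In such a setup, the event discriminator would typically receive reversed features, e.g. `event_logits = event_head(GradReverse.apply(shared_feat, 1.0))`, so that minimizing `total_loss` simultaneously improves detection while pushing the shared representation toward event-irrelevance.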