PVCG: Prompt-Based Vision-Aware Classification and Generation for Multi-Modal Rumor Detection

Published: 01 Jan 2024 · Last Modified: 06 Feb 2025 · ICASSP 2024 · License: CC BY-SA 4.0
Abstract: Multi-modal Rumor Detection (MRD) has emerged as a crucial research hotspot due to the continuous rise in the spread of multi-modal information on the Internet. Existing studies frequently employ traditional single-classifier models, which struggle to classify challenging positive samples accurately. Moreover, interaction between modalities typically relies on an additional fusion module, forcing a trade-off between the granularity of modality interaction and the complexity of the fusion module. To address these issues, we present a model called Prompt-based Vision-aware Classification and Generation (PVCG), which employs a generator module for MRD. Notably, the encoder alone handles modality fusion at a finer granularity by injecting the image as a soft prompt into the text embeddings. Our evaluations on the Fakeddit and Pheme corpora demonstrate that PVCG outperforms the state-of-the-art baselines, showcasing its superior performance on the MRD task.
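The abstract does not give implementation details, but the core fusion idea, projecting image features into the text embedding space and prepending them as a soft prompt so the encoder's self-attention fuses the modalities without a dedicated fusion module, can be sketched as below. This is a minimal PyTorch illustration under stated assumptions: `SoftPromptFusion`, `prompt_len`, the linear projection, and all dimensions are hypothetical choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SoftPromptFusion(nn.Module):
    """Projects pooled image features into the text embedding space and
    prepends them to the text token embeddings as a visual soft prompt,
    so a standard encoder can fuse both modalities without a separate
    fusion module. (Illustrative sketch, not the paper's exact design.)"""

    def __init__(self, image_dim: int, text_dim: int, prompt_len: int = 4):
        super().__init__()
        self.prompt_len = prompt_len
        self.text_dim = text_dim
        # Hypothetical projection: one text-space vector per prompt position.
        self.proj = nn.Linear(image_dim, prompt_len * text_dim)

    def forward(self, image_feats: torch.Tensor,
                text_embeds: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, image_dim), e.g. pooled ViT/ResNet features
        # text_embeds: (batch, seq_len, text_dim) from the text embedding layer
        batch = image_feats.size(0)
        prompt = self.proj(image_feats).view(batch, self.prompt_len, self.text_dim)
        # Prepend the visual soft prompt; the encoder's self-attention then
        # performs fine-grained cross-modal interaction over the joint sequence.
        return torch.cat([prompt, text_embeds], dim=1)

# Toy usage with illustrative dimensions.
fusion = SoftPromptFusion(image_dim=768, text_dim=512)
image_feats = torch.randn(2, 768)       # pooled image features
text_embeds = torch.randn(2, 16, 512)   # embedded text tokens
fused = fusion(image_feats, text_embeds)
print(fused.shape)                      # torch.Size([2, 20, 512])
```

The fused sequence would then be consumed by an encoder-decoder generator that emits a label word (e.g. "rumor" / "non-rumor") rather than a single-classifier head, consistent with the classification-and-generation framing in the abstract.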