LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation

ACL ARR 2024 June Submission 5467 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: The rise of multimodal misinformation on social platforms poses significant challenges for individuals and societies. Its heightened credibility and broader reach make detection more complex, requiring robust reasoning across diverse media types and profound knowledge for accurate verification. The emergence of Large Vision Language Models (LVLMs) offers a potential solution to this problem. Leveraging their proficiency in processing visual and textual information, LVLMs demonstrate promising capabilities in recognizing complex information and exhibit strong reasoning skills. We investigate the potential of LVLMs for multimodal misinformation detection and find that, although LVLMs outperform LLMs, their deep reasoning provides limited power when evidence is lacking. Based on these observations, we propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation. LEMMA leverages the intuition and reasoning capabilities of LVLMs while augmenting them with external knowledge to improve the accuracy of misinformation detection. Our external knowledge extraction module adopts multi-query generation and image source tracing to enhance the rigor and comprehensiveness of the LVLM's reasoning. We observe that LEMMA improves accuracy over the top baseline LVLM by 9% and 13% on the Twitter and Fakeddit datasets, respectively.
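The abstract describes a pipeline of multi-query generation, image source tracing, and evidence-augmented LVLM reasoning. Below is a minimal Python sketch of how such a pipeline could be wired together, based only on that description; the names Post, lvlm, text_search, and reverse_image_search are hypothetical placeholders, not the authors' implementation or API.

```python
# Hypothetical sketch of a LEMMA-style pipeline (assumptions, not the paper's code).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    text: str        # claim text from the social-media post
    image_path: str  # path or URL of the attached image


def detect_misinformation(
    post: Post,
    lvlm: Callable[[str, str], str],                    # (prompt, image) -> response
    text_search: Callable[[str], List[str]],            # query -> evidence snippets
    reverse_image_search: Callable[[str], List[str]],   # image -> source descriptions
    num_queries: int = 3,
) -> str:
    """Return an LVLM verdict on the post, grounded in retrieved external evidence."""
    # 1. Multi-query generation: ask the LVLM for several search queries
    #    that probe different aspects of the claim.
    query_prompt = (
        f"Generate {num_queries} short web-search queries to verify: {post.text}"
    )
    queries = lvlm(query_prompt, post.image_path).splitlines()[:num_queries]

    # 2. External knowledge extraction: textual evidence plus image source tracing.
    evidence: List[str] = []
    for q in queries:
        evidence.extend(text_search(q))
    evidence.extend(reverse_image_search(post.image_path))

    # 3. Evidence-augmented reasoning: the LVLM judges the post with the
    #    retrieved evidence in its context.
    verdict_prompt = (
        "Claim: " + post.text + "\n"
        "Evidence:\n- " + "\n- ".join(evidence) + "\n"
        "Is the claim real or misinformation? Give a label and a brief rationale."
    )
    return lvlm(verdict_prompt, post.image_path)
```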
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Multimodal misinformation detection, LVLM
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5467