Instruction-Tuned LLMs Meet Cross-Modal Label Propagation: A Cross-Modal Framework for Fake News Detection

ACL ARR 2026 January Submission 354 Authors

22 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Fake News Detection; Large Language Models; Instruction Tuning; Graph Neural Networks
Abstract: The proliferation of multimodal fake news severely undermines the information ecosystem, and its accurate detection has become a core research topic in natural language processing and multimedia analysis. Existing approaches integrating LLM-based pseudo-label generation and label propagation have shown promise but suffer two limitations: first, LLMs lack task adaptability due to the absence of task-specific fine-tuning; second, pseudo-labels generated by LLMs solely from text data result in notable modal bias against the multimodal features relied on by the detection task. To address these issues, we first conduct task-specific multimodal collaborative instruction fine-tuning on LLMs, which addresses the modal bias at its root and enhances pseudo-label quality. We then design a multimodal feature transformation alignment module to tackle the secondary modal mismatch between general multimodal features and pseudo-labels generated by fine-tuned LLMs. This work presents a multimodal LLM fine-tuning paradigm, a cross-modal label propagation mechanism integrating the feature alignment module, node labeling rules, and a pseudo-label confidence-based linear weighting strategy, and the LLM-Tuned Cross-Modal Label Propagation Framework (LLM-T-CMLP). Experiments on three public benchmark datasets demonstrate that our framework outperforms current state-of-the-art (SOTA) baselines by a notable margin, fully confirming the effectiveness of our proposed methods.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Fact checking, rumor/misinformation detection; Multimodal applications; NLP for social good
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings (efficiency)
Languages Studied: English, Chinese
Submission Number: 354