Abstract: The proliferation of deceptive content online necessitates robust Fake News Detection (FND) systems. While evidence-based approaches leverage external knowledge to verify claims, existing methods face critical limitations: noisy evidence selection, generalization bottlenecks, and unclear decision-making processes. Recent efforts to harness Large Language Models (LLMs) for FND introduce new challenges, including hallucinated rationales and conclusion bias. To address these issues, we propose \textbf{RoE-FND} (Reason on Experiences FND), a framework that reframes evidence-based FND as a logical deduction task by synergizing LLMs with experiential learning. RoE-FND comprises two stages: (1) an exploration stage of self-reflective knowledge building, in which a knowledge base is curated by analyzing past reasoning errors, and (2) a deployment stage of dynamic criterion retrieval, which synthesizes task-specific reasoning guidelines from historical cases as experiences. The framework further cross-checks rationales against internal experience through a dual-channel procedure. Key contributions include: a case-based reasoning framework for FND that addresses multiple existing challenges, a training-free approach enabling adaptation to evolving situations, and empirical validation of the framework's superior generalization and effectiveness over state-of-the-art methods across three datasets.
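To make the two-stage workflow described in the abstract concrete, the sketch below illustrates one possible reading of it in Python. This is a minimal, hypothetical sketch, not the authors' implementation: the names (`Experience`, `KnowledgeBase`, `explore`, `deploy`), the word-overlap retrieval, and the prompt wording are all assumptions; the paper's actual prompts, retrieval mechanism, and dual-channel cross-check are not specified here.

```python
# Hypothetical sketch of a two-stage "reason on experiences" workflow.
# All identifiers and prompts are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in / text-out language model


@dataclass
class Experience:
    claim: str
    evidence: str
    gold_label: str
    reflection: str  # lesson distilled from a past reasoning error


@dataclass
class KnowledgeBase:
    experiences: List[Experience] = field(default_factory=list)

    def retrieve(self, claim: str, k: int = 3) -> List[Experience]:
        # Toy retrieval: rank stored cases by word overlap with the new claim.
        words = set(claim.lower().split())
        scored = sorted(
            self.experiences,
            key=lambda e: len(words & set(e.claim.lower().split())),
            reverse=True,
        )
        return scored[:k]


def explore(llm: LLM, kb: KnowledgeBase, claim: str, evidence: str, gold: str) -> None:
    """Stage 1 (exploration): self-reflective knowledge building on labeled cases."""
    verdict = llm(f"Claim: {claim}\nEvidence: {evidence}\nVerdict (real/fake):")
    if gold.lower() not in verdict.lower():
        # Only erroneous reasoning is reflected upon and stored as experience.
        reflection = llm(
            f"The verdict '{verdict}' was wrong (gold label: {gold}). "
            "Summarize the reasoning error as a reusable lesson."
        )
        kb.experiences.append(Experience(claim, evidence, gold, reflection))


def deploy(llm: LLM, kb: KnowledgeBase, claim: str, evidence: str) -> str:
    """Stage 2 (deployment): criterion retrieval plus a dual-channel cross-check."""
    criteria = "\n".join(e.reflection for e in kb.retrieve(claim))
    # Channel A argues the claim is real; channel B argues it is fake.
    support = llm(
        f"Criteria:\n{criteria}\nArgue the claim is REAL.\nClaim: {claim}\nEvidence: {evidence}"
    )
    refute = llm(
        f"Criteria:\n{criteria}\nArgue the claim is FAKE.\nClaim: {claim}\nEvidence: {evidence}"
    )
    # Cross-check both rationales against the retrieved experience to pick a verdict.
    return llm(
        f"Criteria:\n{criteria}\nRationale A (real): {support}\n"
        f"Rationale B (fake): {refute}\n"
        "Which rationale better satisfies the criteria? Answer real or fake."
    )
```

The sketch is training-free by construction: adapting to a new domain only requires running `explore` on new labeled cases to grow the knowledge base, consistent with the adaptability claim in the abstract.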
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: rumor/misinformation detection, prompting, free-text/natural language explanations
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 1807