Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models

ACL ARR 2025 May Submission7756 Authors

20 May 2025 (modified: 03 Jul 2025) · CC BY 4.0
Abstract: The proliferation of misinformation on digital platforms reveals the limitations of traditional detection methods, which mostly rely on static classification and fail to capture the intricate process of real-world fact-checking. Despite advancements in Large Language Models (LLMs) that enhance automated reasoning, their application to misinformation detection remains hindered by issues of logical inconsistency and superficial verification. In response, we introduce Debate-to-Detect (D2D), a novel Multi-Agent Debate (MAD) framework that reformulates misinformation detection as a structured adversarial debate. Inspired by fact-checking workflows, D2D assigns domain-specific profiles to each agent and orchestrates a five-stage debate process comprising Opening Statement, Rebuttal, Free Debate, Closing Statement, and Judgment. To move beyond traditional binary classification, D2D introduces a multi-dimensional evaluation mechanism that assesses each claim across five distinct dimensions: Factuality, Source Reliability, Reasoning Quality, Clarity, and Ethics. Experiments with GPT-4o on two fake-news datasets demonstrate significant improvements over baseline methods, and a case study highlights D2D's capability to iteratively refine evidence while improving decision transparency, representing a substantial advancement towards robust and interpretable misinformation detection. Our code is available at \href{https://anonymous.4open.science/r/emnlp_d2d-36E2/}{\texttt{4open.science/emnlp\_d2d-36E2}}.
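To make the described pipeline concrete, the sketch below shows how a five-stage debate with profiled agents and a multi-dimensional judgment could be orchestrated. This is a minimal illustration based only on the abstract: the class and function names (DebateAgent, run_debate, judge, llm) and the agent profiles are assumptions, not the authors' actual implementation, and the llm() call is stubbed so the script runs without model access.

```python
# Minimal sketch of a D2D-style debate pipeline (assumed structure, not the authors' code).

from dataclasses import dataclass

# The four adversarial stages precede the final Judgment stage.
STAGES = ["Opening Statement", "Rebuttal", "Free Debate", "Closing Statement"]
JUDGE_DIMENSIONS = ["Factuality", "Source Reliability", "Reasoning Quality", "Clarity", "Ethics"]


def llm(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., GPT-4o); returns a canned reply here."""
    return f"[model output for a prompt of {len(prompt)} characters]"


@dataclass
class DebateAgent:
    role: str     # "affirmative" (claim is genuine) or "negative" (claim is misinformation)
    profile: str  # domain-specific persona, e.g., "public-health fact-checker" (illustrative)

    def speak(self, stage: str, claim: str, transcript: list[str]) -> str:
        history = "\n".join(transcript)
        prompt = (
            f"You are a {self.profile} arguing the {self.role} side.\n"
            f"Stage: {stage}\nClaim: {claim}\nDebate so far:\n{history}\n"
            f"Give your {stage.lower()}."
        )
        return llm(prompt)


def judge(claim: str, transcript: list[str]) -> dict[str, str]:
    """Judgment stage: assess the debated claim on each of the five evaluation dimensions."""
    scores = {}
    for dim in JUDGE_DIMENSIONS:
        prompt = (
            f"Claim: {claim}\nDebate transcript:\n" + "\n".join(transcript) +
            f"\nAssess the claim on the dimension '{dim}' and give a verdict."
        )
        scores[dim] = llm(prompt)
    return scores


def run_debate(claim: str) -> dict[str, str]:
    affirmative = DebateAgent(role="affirmative", profile="domain fact-checker")
    negative = DebateAgent(role="negative", profile="skeptical investigative journalist")
    transcript: list[str] = []
    # Run the four debate stages, then hand the full transcript to the judge.
    for stage in STAGES:
        for agent in (affirmative, negative):
            turn = agent.speak(stage, claim, transcript)
            transcript.append(f"{stage} ({agent.role}): {turn}")
    return judge(claim, transcript)


if __name__ == "__main__":
    verdict = run_debate("Example claim: drinking bleach cures the flu.")
    for dimension, assessment in verdict.items():
        print(f"{dimension}: {assessment}")
```

In this reading, the binary fake/real label is replaced by per-dimension assessments produced in the Judgment stage, which is what gives the framework its multi-dimensional, more interpretable output.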
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: misinformation detection and analysis,quantitative analyses of news and/or social media
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English, Chinese
Submission Number: 7756