VeriChain: Reliable Fact-Checking via Multi-Agent Collaboration and Reasoning Refinement

ACL ARR 2026 January Submission 8002 Authors

06 Jan 2026 (modified: 20 Mar 2026)
License: CC BY 4.0
Keywords: Fact-checking, Large Language Model, Multi-Agent Collaboration
Abstract: Fact-checking with large language models (LLMs) plays a critical role in combating misinformation. However, current LLM-based fact-checking approaches struggle to reason reliably. On the one hand, LLMs may produce seemingly persuasive multi-step explanations that are logically inconsistent with the provided evidence. On the other hand, when the given evidence is incomplete, LLMs tend to misinterpret the absence of supporting evidence as refutation of the claim, an error known as the argument from ignorance. Both behaviors lead to unreliable reasoning and incorrect fact-checking verdicts. To address these challenges, we propose VeriChain, a novel framework that leverages multi-agent collaboration for fact-checking. The core idea of VeriChain is to decompose fact-checking into a collaborative multi-agent process in which reasoning conclusions are evaluated by a Verifier Agent and progressively refined through a dynamic verification loop. The Verifier checks the logical consistency of each inference step using First-Order Logic (FOL), identifying both contradictions with the given evidence and instances of insufficient support. Based on these assessments, VeriChain iteratively refines the reasoning and retrieves additional knowledge as needed, thereby producing more accurate fact-checking results. Extensive experiments demonstrate the effectiveness of VeriChain.
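To make the dynamic verification loop described in the abstract concrete, the sketch below shows one plausible shape it could take. This is a minimal illustration only: every name (fact_check, verify_step, retrieve, refine, StepStatus) is a hypothetical placeholder, not the authors' actual implementation, and the reasoner, Verifier, and retriever are abstracted as injected callables.

```python
# Hypothetical sketch of a VeriChain-style verification loop; all names are
# illustrative assumptions based on the abstract, not the paper's real API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class StepStatus(Enum):
    CONSISTENT = auto()     # step follows logically from the evidence
    CONTRADICTION = auto()  # step conflicts with the evidence
    INSUFFICIENT = auto()   # evidence neither supports nor refutes the step


@dataclass
class Verdict:
    label: str              # e.g. "SUPPORTED" / "REFUTED" / "NOT ENOUGH INFO"
    reasoning: list[str]    # the multi-step explanation


def fact_check(
    claim: str,
    evidence: list[str],
    reason: Callable[[str, list[str]], Verdict],        # reasoning agent
    verify_step: Callable[[str, list[str]], StepStatus],  # FOL-based Verifier
    retrieve: Callable[[str], list[str]],                # knowledge retriever
    refine: Callable[[Verdict, str], Verdict],           # reasoning refiner
    max_rounds: int = 3,
) -> Verdict:
    """Iteratively refine a reasoning chain until the Verifier accepts it."""
    verdict = reason(claim, evidence)
    for _ in range(max_rounds):
        flawed_step = None
        for step in verdict.reasoning:
            status = verify_step(step, evidence)
            if status is StepStatus.INSUFFICIENT:
                # Insufficient support is NOT treated as refutation
                # (avoiding the argument from ignorance); instead,
                # fetch additional evidence for the shaky step.
                evidence = evidence + retrieve(step)
                flawed_step = step
                break
            if status is StepStatus.CONTRADICTION:
                flawed_step = step
                break
        if flawed_step is None:
            return verdict  # every inference step passed verification
        verdict = refine(verdict, flawed_step)
    return verdict  # best-effort verdict after max_rounds of refinement
```

Under these assumptions, the loop terminates either when the Verifier finds no contradicted or under-supported step, or after a fixed refinement budget, which matches the abstract's description of progressive refinement with retrieval as needed.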
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: misinformation detection and analysis, quantitative analyses of news and/or social media
Languages Studied: English
Submission Number: 8002