Contradiction to Consensus: Dual-Perspective, Multi-Source Fact Verification with Source-Level Disagreement Using LLMs

ACL ARR 2025 July Submission 586 Authors

28 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: The rapid spread of misinformation across digital platforms poses significant societal risks. Yet most automated fact-checking systems depend on a single knowledge source and prioritize supporting evidence without exposing disagreement among sources, limiting both coverage and transparency. To address these limitations, we present a complete system for open-domain fact verification (ODFV) that leverages large language models (LLMs), multi-perspective evidence retrieval, and cross-source disagreement analysis. Our approach introduces a novel retrieval strategy that collects evidence for both the original and the negated forms of a claim, enabling the system to capture supporting and contradicting information from diverse sources: Wikipedia, PubMed, and Google. These evidence sets are filtered, deduplicated, and aggregated across sources to form a unified and enriched knowledge base that better reflects the complexity of real-world information. This aggregated evidence is then used for veracity classification with LLMs. We further enhance interpretability by analyzing model confidence scores to quantify and visualize inter-source disagreement. Through extensive evaluation on four benchmark datasets with five LLMs, we show that knowledge aggregation not only improves claim classification performance but also reveals differences in source-specific reasoning. Our findings underscore the importance of embracing diversity, contradiction, and aggregation in evidence for building reliable and transparent fact-checking systems. Our full code is available on GitHub.
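To make the pipeline the abstract describes concrete, the following is a minimal Python sketch of dual-perspective retrieval, cross-source aggregation, and confidence-based disagreement scoring. Every name here (negate_claim, retrieve, classify_with_llm, the variance-based disagreement measure) is a hypothetical placeholder, not the authors' actual API; the real implementation is in the linked GitHub repository.

```python
# Hypothetical sketch of the dual-perspective, multi-source verification
# pipeline. Placeholders would be replaced by real retrievers and an LLM.

from collections import defaultdict

SOURCES = ["wikipedia", "pubmed", "google"]

def negate_claim(claim: str) -> str:
    """Placeholder: the paper derives a negated claim form (e.g. via an LLM)."""
    return f"It is not the case that {claim}"

def retrieve(query: str, source: str, k: int = 5) -> list[str]:
    """Placeholder retriever for one knowledge source."""
    return []  # wrap a Wikipedia / PubMed / Google search client here

def classify_with_llm(claim: str, evidence: list[str]) -> tuple[str, float]:
    """Placeholder: prompt an LLM with the claim plus evidence and
    return a (label, confidence) pair."""
    return "NOT ENOUGH INFO", 0.0

def gather_evidence(claim: str) -> dict[str, list[str]]:
    """Collect evidence for both the claim and its negation from every
    source, then deduplicate each source's pool (order preserved)."""
    pool = defaultdict(list)
    for source in SOURCES:
        for query in (claim, negate_claim(claim)):
            pool[source].extend(retrieve(query, source))
        pool[source] = list(dict.fromkeys(pool[source]))
    return pool

def disagreement(per_source_conf: dict[str, float]) -> float:
    """One simple way to quantify inter-source disagreement: the variance
    of per-source confidence scores (the paper may use another measure)."""
    vals = list(per_source_conf.values())
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def verify(claim: str):
    """Classify per source, then on the aggregated evidence pool."""
    pool = gather_evidence(claim)
    per_source = {s: classify_with_llm(claim, ev) for s, ev in pool.items()}
    aggregated = [e for ev in pool.values() for e in ev]
    label, conf = classify_with_llm(claim, aggregated)
    spread = disagreement({s: c for s, (_, c) in per_source.items()})
    return label, conf, per_source, spread
```

Under these assumptions, the final label comes from the aggregated evidence pool, while the per-source (label, confidence) pairs and the spread score expose where the sources disagree.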
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: misinformation detection and analysis
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 3.1
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: Yes
B5 Elaboration: A.1
B6 Statistics For Data: Yes
B6 Elaboration: A.1
C Computational Experiments: Yes
C1 Model Size And Budget: No
C1 Elaboration: This work is not focused on model development or score optimization. Instead, it emphasizes the retrieval and analysis of evidence across multiple sources. Therefore, reporting model size and computational budget is not applicable in the context of this study.
C2 Experimental Setup And Hyperparameters: No
C2 Elaboration: This work does not focus on model development or score optimization and is conducted in a zero-shot setting. Instead, it emphasizes evidence retrieval and analysis across multiple knowledge sources. As a result, reporting the experimental setup and hyperparameters is not applicable in the context of this study.
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 4
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: No
D3 Elaboration: All datasets used in this study are publicly available and openly licensed for research purposes. Appropriate citations are provided for each dataset, and no additional data collection or user interaction was involved that would require individual consent.
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E AI Assistants In Research Or Writing: No
E1 Information About Use Of AI Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 586