Robust Claim Verification Through Fact Detection

ACL ARR 2024 April Submission 597 Authors

16 Apr 2024 (modified: 20 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract:

Claim verification can be a difficult task, even for humans. In this paper, we propose a method that improves automated claim verification by extracting short facts from evidence to strengthen reasoning. Our framework (FactGen) uses Large Language Models (LLMs) to generate short factual statements from evidence and then labels these facts according to their semantic relevance to the claim and evidence. We then add a relevant-fact detection task (FactDetect) to the claim verification task in a multi-task learning setup to improve performance and explainability.
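
The following is a minimal sketch of the FactGen generation-and-labeling step described above; the prompt wording, the placeholder call_llm function, the encoder choice, and the similarity threshold are all illustrative assumptions, not the authors' released code.

```python
# Illustrative FactGen sketch: decompose evidence into short facts with an
# LLM, then label each fact by semantic relevance to the claim and evidence.
# All names, prompts, and thresholds here are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed sentence encoder


def call_llm(prompt: str) -> str:
    """Placeholder for any instruction-tuned LLM call (hypothetical)."""
    raise NotImplementedError


def factgen(evidence: str) -> list[str]:
    # Ask the LLM to break the evidence into short, atomic factual statements.
    prompt = (
        "Break the following evidence into short, self-contained factual "
        f"statements, one per line:\n{evidence}"
    )
    return [ln.strip() for ln in call_llm(prompt).splitlines() if ln.strip()]


def label_facts(claim: str, evidence: str, facts: list[str], thresh: float = 0.5):
    # Label each generated fact as relevant (1) or irrelevant (0) via cosine
    # similarity between the fact and the claim+evidence pair.
    target = encoder.encode([claim + " " + evidence])
    fact_vecs = encoder.encode(facts)
    sims = util.cos_sim(fact_vecs, target).squeeze(1)
    return [(f, int(s >= thresh)) for f, s in zip(facts, sims.tolist())]
```

The labeled facts produced this way could then serve as supervision for the auxiliary FactDetect task.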

Our method improves the supervised claim verification model by 15% in F1 score on SciFact and achieves competitive results on other challenging scientific claim verification datasets. We also show that FactDetect can be adapted to LLMs as a prompting strategy for verdict prediction. Incorporating FactDetect into relatively small LLMs such as Llama2-13B and Vicuna-13B significantly improves verification performance on SciFact, and higher-quality FactGen-generated sentences outperform state-of-the-art models on all test sets.
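
Below is a hedged sketch of how the auxiliary fact-detection objective could be combined with verdict prediction in a multi-task model; the head layout, three-way verdict labels, and the weighting factor lam are assumptions rather than the paper's exact configuration.

```python
# Illustrative multi-task setup: a shared encoder with a verdict head and a
# per-fact relevance head, trained with a weighted joint loss (assumed form).
import torch
import torch.nn as nn


class MultiTaskVerifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int, lam: float = 0.5):
        super().__init__()
        self.encoder = encoder                 # e.g., a transformer encoder
        self.verdict_head = nn.Linear(hidden, 3)  # SUPPORT / REFUTE / NEI
        self.fact_head = nn.Linear(hidden, 2)     # relevant / irrelevant
        self.lam = lam                         # assumed task-weighting factor
        self.ce = nn.CrossEntropyLoss()

    def forward(self, claim_ev_inputs, fact_inputs, verdict_y, fact_y):
        # Assumes the encoder returns pooled [batch, hidden] representations.
        h_claim = self.encoder(claim_ev_inputs)
        h_fact = self.encoder(fact_inputs)
        loss_verdict = self.ce(self.verdict_head(h_claim), verdict_y)
        loss_fact = self.ce(self.fact_head(h_fact), fact_y)
        # Joint objective: verdict classification plus fact detection.
        return loss_verdict + self.lam * loss_fact
```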

Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: claim verification, classification, text generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Data resources, Position papers
Languages Studied: English
Submission Number: 597