Abstract: Claim verification is a challenging task. In this paper, we present a method to enhance the robustness and reasoning capabilities of automated claim verification by extracting short facts from evidence. Our novel approach, FactDetect, leverages Large Language Models (LLMs) to generate concise factual statements from evidence and to label these facts based on their semantic relevance to the claim and evidence. The generated facts are then combined with the claim and evidence to train a lightweight supervised model, in which we incorporate a fact-detection task into the claim verification process as a multitask learning objective to improve both performance and explainability. We also show that augmenting the claim verification prompt with FactDetect enhances zero-shot claim verification using LLMs.
Our method demonstrates competitive results in supervised claim verification, improving the F1 score by 15% when evaluated on challenging scientific claim verification datasets. We further show that FactDetect can be combined with the claim and evidence for zero-shot verdict prediction with LLMs (AugFactDetect). AugFactDetect outperforms the best-performing baselines with statistical significance on three challenging scientific claim verification datasets, with an average performance gain of 17.3%.
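To make the pipeline concrete, the following is a minimal sketch of the fact-generation and prompt-augmentation (AugFactDetect) steps described above, assuming an OpenAI-style chat-completions client; the model name, prompt wording, helper functions, and example inputs are illustrative assumptions, not the authors' exact prompts or implementation.

```python
# Minimal sketch of fact generation + fact-augmented zero-shot verdict
# prediction. Assumes the `openai` Python package and an OPENAI_API_KEY
# in the environment; all prompts and names here are hypothetical.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def chat(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text response."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def generate_facts(evidence: str) -> list[str]:
    """Step 1: ask the LLM for short, self-contained facts from the evidence."""
    prompt = (
        "Extract short, self-contained factual statements from the evidence "
        "below, one per line.\n\nEvidence: " + evidence
    )
    return [ln.strip("- ").strip() for ln in chat(prompt).splitlines() if ln.strip()]


def predict_verdict(claim: str, evidence: str, facts: list[str]) -> str:
    """Step 2: augment the claim-verification prompt with the generated facts."""
    fact_block = "\n".join(f"- {f}" for f in facts)
    prompt = (
        "Decide whether the evidence SUPPORTS, REFUTES, or gives NOT ENOUGH INFO "
        "about the claim. Answer with one label.\n\n"
        f"Claim: {claim}\nEvidence: {evidence}\nRelevant facts:\n{fact_block}\nLabel:"
    )
    return chat(prompt)


if __name__ == "__main__":
    # Illustrative example inputs, not drawn from the paper's datasets.
    claim = "Vitamin D supplementation reduces the risk of respiratory infections."
    evidence = (
        "A meta-analysis of 25 randomised controlled trials found that vitamin D "
        "supplementation reduced the risk of acute respiratory infection."
    )
    facts = generate_facts(evidence)
    print(predict_verdict(claim, evidence, facts))
```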
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Claim Verification, Text Classification, Fact Checking
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: English
Submission Number: 4038