``It takes two to tango": A Combination of Closed-Domain and Open-Domain Few-Shot Prompting for Claim Verification

ACL ARR 2024 June Submission 4909 Authors

16 Jun 2024 (modified: 02 Jul 2024), ACL ARR 2024 June Submission, CC BY 4.0
Abstract: The widespread use of social media platforms has resulted in the swift dissemination of misinformation and fake news, creating a critical need for computational models for automated fact-checking. Existing work on claim verification mainly relies on supervised learning from manually annotated claim-evidence pairs, which is resource-intensive and prone to biases, limiting generalization across domains. To address this gap, we investigate zero-shot domain adaptation for claim verification, where no labeled training data is available for the target domain. We propose a hybrid approach that combines in-context learning from labeled source-domain training data with topically relevant contexts retrieved from target document collections such as Wikipedia by means of retrieval-augmented generation (RAG). We evaluate zero-shot domain adaptation of claim verification on three target domains, namely climate change, scientific publications, and COVID-19, using the training set of the FEVER dataset as the source domain. We find that our proposed approach outperforms supervised domain-adaptation models, several LLM prompting-based baselines, including zero-shot and few-shot prompting from the source domain, and a RAG-based approach over a target collection of Wikipedia.
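The abstract describes combining source-domain few-shot demonstrations with retrieved target-domain context in a single prompt. Below is a minimal Python sketch of that idea, assuming FEVER-style labeled examples as the closed-domain part and a toy lexical retriever over a Wikipedia-like collection as the open-domain part; the retriever, example format, and label set are illustrative assumptions, not the authors' exact prompt templates or models.

```python
# Sketch of the hybrid prompt: few-shot claim-evidence demonstrations from the
# source domain (e.g. FEVER) combined with passages retrieved from a target
# collection (e.g. Wikipedia). All names and formats here are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class LabeledExample:
    claim: str
    evidence: str
    label: str  # e.g. SUPPORTED / REFUTED / NOT ENOUGH INFO


def retrieve_passages(claim: str, collection: List[str], k: int = 3) -> List[str]:
    """Toy word-overlap retriever standing in for a real RAG component (BM25, dense retrieval, etc.)."""
    scored = sorted(
        collection,
        key=lambda passage: len(set(claim.lower().split()) & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_hybrid_prompt(
    target_claim: str,
    source_examples: List[LabeledExample],
    target_collection: List[str],
) -> str:
    """Combine source-domain in-context examples with retrieved target-domain context."""
    lines = ["Decide whether each claim is SUPPORTED, REFUTED, or NOT ENOUGH INFO.", ""]
    # Closed-domain part: labeled demonstrations from the source domain.
    for ex in source_examples:
        lines += [f"Claim: {ex.claim}", f"Evidence: {ex.evidence}", f"Label: {ex.label}", ""]
    # Open-domain part: topically relevant passages retrieved for the target claim.
    for passage in retrieve_passages(target_claim, target_collection):
        lines.append(f"Context: {passage}")
    lines += [f"Claim: {target_claim}", "Label:"]
    return "\n".join(lines)


if __name__ == "__main__":
    demos = [
        LabeledExample(
            "The Eiffel Tower is in Berlin.",
            "The Eiffel Tower is a landmark in Paris, France.",
            "REFUTED",
        )
    ]
    wiki = [
        "Atmospheric CO2 concentrations have risen sharply since the industrial era.",
        "The Great Barrier Reef is located off the coast of Australia.",
    ]
    prompt = build_hybrid_prompt("Atmospheric CO2 has increased since 1850.", demos, wiki)
    print(prompt)  # this prompt would then be passed to an LLM to produce the verdict
```

In this sketch the LLM call itself is left out; only the prompt construction is shown, since the abstract does not specify which model or decoding setup is used.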
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Fact verification, Domain Adaptation, Retrieval-Augmented Generation, In-context learning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4909