Abstract: The recent development of fact verification systems with natural logic has enhanced their explainability by aligning claims with evidence through set-theoretic operators, providing faithful justifications. Despite these advancements, such systems often rely on large amounts of training data annotated with natural logic. To address this issue, we propose a zero-shot method that utilizes the generalization capabilities of instruction-tuned large language models. To comprehensively assess the zero-shot capabilities of our method and of other fact verification systems, we evaluate all models on both artificial and real-world claims, including datasets in Danish and Mandarin Chinese. We compare our method against other fact verification systems in two setups. First, in the zero-shot generalization setup, our approach outperforms systems that were not specifically trained on natural logic data, achieving an average accuracy improvement of 8.61 points over the best-performing baseline. Second, in the zero-shot transfer setup, we show that current natural-logic-based systems do not generalize well to other domains: our method outperforms systems trained on datasets with artificial claims on all datasets with real-world claims.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking
Contribution Types: Approaches to low-resource settings
Languages Studied: English, Danish, Chinese
Submission Number: 5108