Abstract: The recent development of fact verification systems with natural logic has enhanced their explainability by aligning claims with evidence through set-theoretic operators, providing justifications that faithfully expose the model's reasoning. Despite these advancements, such systems often rely on a large amount of training data annotated with natural logic. To address this issue, we propose a zero-shot method that utilizes the generalization capabilities of instruction-tuned large language models. Our system uses constrained decoding to mitigate hallucinations and employs weighted prompt ensembles to improve stability. We evaluate our system on artificial and real-world fact verification data. In a zero-shot setup where models were not trained on any data annotated with natural logic, our method surpasses the best baselines by an average of 7.52 accuracy points. We also demonstrate the method's multilingual capability on languages such as Danish, where we outperform the baselines by 8.72 accuracy points.
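To make the two techniques named in the abstract concrete, here is a minimal sketch of (a) constrained decoding that restricts each prediction to a fixed inventory of natural-logic (NatOp) operators, and (b) a weighted prompt ensemble that averages per-operator probabilities across prompt variants. This is an illustrative assumption, not the authors' implementation; all names (`NATOPS`, `decode_step`, `ensemble_vote`) are hypothetical.

```python
# Hypothetical sketch of constrained decoding over NatOp operators and a
# weighted prompt ensemble; not the paper's actual implementation.

# The seven NatOp relations commonly used in natural-logic inference.
NATOPS = ["equivalence", "forward_entailment", "reverse_entailment",
          "negation", "alternation", "cover", "independence"]

def decode_step(logits: dict) -> str:
    """Pick the highest-scoring candidate, but only among allowed NatOps.

    `logits` maps candidate outputs (possibly hallucinated free text) to
    scores; masking everything outside NATOPS guarantees the model emits a
    valid operator at every step.
    """
    allowed = {tok: score for tok, score in logits.items() if tok in NATOPS}
    return max(allowed, key=allowed.get)

def ensemble_vote(per_prompt_probs: list, weights: list) -> str:
    """Weighted prompt ensemble: average per-operator probabilities across
    prompt variants, each contributing in proportion to its weight."""
    total = sum(weights)
    scores = {op: 0.0 for op in NATOPS}
    for probs, w in zip(per_prompt_probs, weights):
        for op in NATOPS:
            scores[op] += (w / total) * probs.get(op, 0.0)
    return max(scores, key=scores.get)
```

In this sketch, constrained decoding enforces a closed output vocabulary (so free-text hallucinations can never be selected), while the ensemble smooths over prompt sensitivity by weighting more reliable prompt variants higher.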
Paper Type: long
Research Area: NLP Applications
Contribution Types: Approaches to low-resource settings
Languages Studied: English, Danish, Chinese