Rethink to Check: Mitigating Confirmation Bias for End-to-End Multimodal Fact-Checking

ACL ARR 2024 June Submission 1949 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: End-to-end multimodal fact-checking (MFC) aims to assess the truthfulness of claims using retrieved multimodal evidence. Existing methods rely on the stance extracted from the evidence, achieving good performance with annotated gold evidence but performing poorly with system-retrieved evidence. The key issue is that existing models are exposed only to annotated gold evidence during training, which inevitably leads to confirmation bias: the model treats low-quality system-retrieved evidence as if it were high-quality gold evidence at test time, resulting in poor robustness and generalization. To mitigate this bias, we propose a novel multi-check framework based on causal intervention and counterfactual reasoning. It incorporates three independent checkers that verify claims from diverse perspectives, thereby ensuring more balanced and accurate fact-checking. Specifically, we first construct two distinct types of counterfactual instances via causal intervention. Then, we apply counterfactual reasoning to train three independent checkers with tailored counterfactual instances or annotated samples. During inference, we eliminate confirmation bias by synthesizing the verification results of all checkers. Experimental results demonstrate the superiority of our proposed framework over state-of-the-art methods, with performance improvements of 5.5% and 16.9% using annotated and system-retrieved evidence, respectively. Our code will be released once the paper is accepted.
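The sketch below (not part of the submission) illustrates one way the inference-time synthesis of the three checkers' verdicts could look, assuming each checker is a module that maps claim and evidence features to class logits and that the verdicts are combined by a simple weighted sum; the names `aggregate_checkers`, `claim_feats`, and `evidence_feats` are hypothetical, and the abstract does not specify the actual aggregation rule.

```python
import torch


def aggregate_checkers(claim_feats, evidence_feats, checkers, weights=None):
    """Combine the verdicts of independent checkers at inference time.

    `checkers` is a list of callables, each mapping (claim, evidence)
    features to class logits over e.g. {supported, refuted, NEI}.
    The weighted-sum scheme here is an illustrative assumption, not the
    paper's method.
    """
    if weights is None:
        weights = [1.0 / len(checkers)] * len(checkers)
    # Weight each checker's logits, sum across checkers, then take the
    # highest-scoring class as the final verdict for each claim.
    weighted = [w * c(claim_feats, evidence_feats) for w, c in zip(weights, checkers)]
    return torch.stack(weighted).sum(dim=0).argmax(dim=-1)
```

Uniform weights are used only for concreteness; in the paper each checker is trained on different counterfactual or annotated instances, so a learned or validation-tuned weighting would be an equally plausible choice.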
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking, multimodal applications
Languages Studied: English
Submission Number: 1949