Abstract: Identifying appropriate supporting evidence is critical to the success of scientific fact checking. However, existing approaches rely on off-the-shelf information retrieval algorithms that rank documents by relevance rather than by the evidence they provide to support or refute the claim being checked. This paper demonstrates the importance of effective evidence identification by developing an ideal document relevance scorer, ComboScorer. It then proposes +VeriRel, which approximates joint feedback for automatic relevance assessment. Experimental results on three scientific fact checking datasets (SciFact, SciFact-Open and Check-COVID) show that +VeriRel consistently achieves leading performance in document evidence retrieval and has a positive impact on downstream verification. Moreover, incorporating +VeriRel yields higher verification performance while using fewer documents. This study highlights the potential of integrating verification feedback into document relevance assessment for effective scientific fact checking systems, and points to promising future work on investigating fine-grained relevance within complex documents for advanced scientific fact checking.
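The abstract's core idea, re-ranking retrieved documents with a verification-feedback signal rather than relevance alone, can be illustrated with a minimal sketch. The function names, the `verifier` interface, and the linear fusion with weight `alpha` below are all hypothetical assumptions for illustration; they are not the paper's actual ComboScorer or +VeriRel implementation.

```python
# Hypothetical sketch: re-rank retrieved documents by fusing an IR relevance
# score with a verification-feedback score. The linear fusion and weight
# alpha are assumptions, not the paper's method.
from typing import Callable, List, Tuple


def rerank_with_verification(
    claim: str,
    ranked_docs: List[Tuple[str, float]],  # (document text, IR relevance score)
    verifier: Callable[[str, str], float],  # returns how well the doc supports/refutes the claim
    alpha: float = 0.5,  # fusion weight between relevance and verification (assumed)
) -> List[Tuple[str, float]]:
    """Combine relevance and verification signals into a single ranking score."""
    fused = []
    for doc, rel_score in ranked_docs:
        ver_score = verifier(claim, doc)  # verification feedback signal
        fused.append((doc, alpha * rel_score + (1 - alpha) * ver_score))
    return sorted(fused, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Toy verifier: keyword overlap with the claim, standing in for a
    # trained claim-verification model.
    def toy_verifier(claim: str, doc: str) -> float:
        claim_terms = set(claim.lower().split())
        doc_terms = set(doc.lower().split())
        return len(claim_terms & doc_terms) / max(len(claim_terms), 1)

    docs = [
        ("vitamin c megadoses cure the common cold", 0.9),
        ("vitamin c intake and cold duration in adults", 0.7),
    ]
    print(rerank_with_verification("vitamin c shortens cold duration", docs, toy_verifier))
```

In practice the toy verifier would be replaced by a trained verification model, and documents with high relevance but no verifiable evidence would be demoted, which is the motivation the abstract describes.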
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Information retrieval, re-ranking, scientific fact checking
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3935