LongChecker: Improving scientific claim verification by modeling full-abstract context

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: The spread of scientific mis- and dis-information has motivated the development of datasets and models for the task of scientific claim verification. We address two modeling challenges associated with this task. First, existing claim verification systems make predictions by extracting an evidentiary sentence (or sentences) from a larger context, and then predicting whether this sentence supports or refutes the claim in question. This can be problematic, since the meaning of the selected sentence may change when interpreted outside its original context. Second, given the difficulty of collecting high-quality fact-checking annotations in expert domains, there is an unaddressed need for methods that facilitate zero-/few-shot domain adaptation. Motivated by these challenges, we develop LongChecker. Given a claim and an evidence-containing abstract, LongChecker predicts a fact-checking label and identifies evidentiary sentences in a multi-task fashion, based on a shared encoding of all available context. This approach enables LongChecker to perform domain adaptation by leveraging weakly-supervised in-domain data. We show that LongChecker achieves state-of-the-art performance on three datasets, and conduct analysis confirming that its strong performance is due to its ability to model full-abstract context.
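The multi-task design sketched in the abstract can be made concrete with a short PyTorch sketch. The snippet below assumes a Longformer backbone encoding the claim concatenated with the full abstract, with a label head reading the sequence-start token and a rationale head reading one representative token per abstract sentence; the class name, head layout, and token-indexing convention are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import LongformerModel

class LongCheckerSketch(nn.Module):
    """Minimal sketch: jointly predict a fact-checking label and
    per-sentence rationales from a shared encoding of the claim
    plus the full abstract."""

    def __init__(self, backbone="allenai/longformer-base-4096", num_labels=3):
        super().__init__()
        self.encoder = LongformerModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.label_head = nn.Linear(hidden, num_labels)  # e.g. SUPPORT / NEI / REFUTE
        self.rationale_head = nn.Linear(hidden, 2)       # evidence / not evidence

    def forward(self, input_ids, attention_mask, global_attention_mask, sent_token_idx):
        # Shared encoding of the claim and every abstract sentence at once,
        # so each sentence is interpreted in its full-abstract context.
        out = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            global_attention_mask=global_attention_mask,
        ).last_hidden_state                              # (batch, seq_len, hidden)

        # Label prediction from the sequence-start token representation.
        label_logits = self.label_head(out[:, 0])

        # Rationale prediction: gather one token per abstract sentence
        # (sent_token_idx has shape (batch, n_sents), e.g. separator positions).
        batch_idx = torch.arange(out.size(0), device=out.device).unsqueeze(1)
        sent_reprs = out[batch_idx, sent_token_idx]      # (batch, n_sents, hidden)
        rationale_logits = self.rationale_head(sent_reprs)

        return label_logits, rationale_logits
```

Under this reading, "multi-task" would amount to training both heads jointly, e.g. with a weighted sum of cross-entropy losses over the label and rationale predictions; the weighting scheme here is left open, as the abstract does not specify it.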
Paper Type: long