Sentence-Level Explicit Inductive Inference against The Attestation Bias of LLMs

ACL ARR 2025 February Submission7747 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Despite the increasingly strong reasoning abilities that many large language models (LLMs) have demonstrated, they are reported to hold an attestation bias in inference tasks: instead of focusing on entailment signals between a premise and a hypothesis, LLMs are easily misled by whether the hypothesis is factual according to the models' knowledge. To further study this bias and mitigate its negative effects, in this paper we propose a sentence-level explicit inductive inference pipeline. By testing our pipeline on three NLI datasets with four mainstream LLMs, we demonstrate that although the attestation bias remains a severe problem, it can be exploited to improve LLMs' inference performance and to mitigate the bias itself.
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: natural language inference, textual entailment
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 7747