Sentence-Level Soft Attestation Bias of LLMs

ACL ARR 2025 May Submission 7584 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: While many large language models (LLMs) have demonstrated evolving reasoning ability, they are reported to exhibit attestation bias in inference tasks: instead of focusing on the entailment signal between a premise and a hypothesis, LLMs are easily misled by whether the hypothesis is factual according to the model's knowledge. However, previous studies of attestation bias require the factuality of input sentences to be determinable, which is often not the case in inference tasks. In this paper, we propose soft attestation, a measurement compatible with any NLI dataset, and implement a sentence-level explicit inductive inference pipeline. Evaluating this pipeline against attestation bias on three NLI datasets with four mainstream LLMs, we demonstrate that attestation bias remains a severe problem in sentence-level inference, yet it can also be exploited to improve LLMs' inference performance.
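For concreteness, below is a minimal sketch of how a soft attestation score might be probed and used as a bias diagnostic. The prompt wording, the `yes_prob` helper, and the `attestation_gap` diagnostic are illustrative assumptions for exposition, not the submission's actual pipeline.

```python
from typing import Callable, Iterable, Tuple

# Hypothetical helper: returns the model's probability of answering "yes"
# to a prompt (e.g., read off the logits of an instruction-tuned LLM).
YesProb = Callable[[str], float]


def soft_attestation(hypothesis: str, yes_prob: YesProb) -> float:
    """Graded belief that the hypothesis is factual. No gold factuality
    label is required, so the score applies to any NLI dataset."""
    prompt = ("Is the following statement true in the real world? "
              f"Answer yes or no.\n{hypothesis}")
    return yes_prob(prompt)


def entailment_score(premise: str, hypothesis: str, yes_prob: YesProb) -> float:
    """Graded entailment judgment for a premise-hypothesis pair."""
    prompt = (f"Premise: {premise}\nHypothesis: {hypothesis}\n"
              "Does the premise entail the hypothesis? Answer yes or no.")
    return yes_prob(prompt)


def attestation_gap(pairs: Iterable[Tuple[str, str]],
                    yes_prob: YesProb,
                    threshold: float = 0.5) -> float:
    """Illustrative diagnostic: difference in mean entailment score between
    hypotheses the model believes (soft attestation >= threshold) and those
    it does not, over pairs sharing the same gold label. A large positive
    gap would indicate attestation bias."""
    attested, unattested = [], []
    for premise, hypothesis in pairs:
        score = entailment_score(premise, hypothesis, yes_prob)
        if soft_attestation(hypothesis, yes_prob) >= threshold:
            attested.append(score)
        else:
            unattested.append(score)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(attested) - mean(unattested)
```

Such a gap statistic could be computed per gold label (entailing vs. non-entailing pairs), and the soft attestation score itself could be fed back into the pipeline to correct or exploit the bias, as the abstract suggests.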
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: textual entailment, natural language inference
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 7584