Explicit Inductive Inference using Large Language Models

ACL ARR 2024 June Submission 2898 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) are reported to exhibit an undesirable attestation bias on inference tasks: when asked to predict whether a premise $P$ entails a hypothesis $H$, instead of assessing $H$'s conditional truthfulness as entailed by $P$, LLMs tend to use the out-of-context truth label of $H$ as a fragile proxy. In this paper, we propose a pipeline that exploits this bias to perform explicit inductive inference. Our pipeline uses an LLM to transform a premise into a set of attested alternatives, and then aggregates the answers to the derived entailment queries to support the original inference prediction. On a directional predicate entailment benchmark, we demonstrate that this simple pipeline improves the overall performance of LLMs on inference and substantially alleviates the impact of their attestation bias.
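The sketch below illustrates one way the described pipeline could be wired up, assuming a generic `llm` text-completion callable; the prompt wording, the `|||`-delimited output format, the number of alternatives, and the majority-vote aggregation are illustrative assumptions, not details confirmed by the abstract.

```python
from typing import Callable, List, Tuple

# Hedged sketch of the explicit inductive inference pipeline described above.
# `llm` is any function mapping a prompt string to a completion string.

def generate_attested_pairs(llm: Callable[[str], str], premise: str,
                            hypothesis: str, n: int = 3) -> List[Tuple[str, str]]:
    """Ask the LLM to instantiate the shared arguments with real entities so the
    premise becomes an attested (real-world true) statement, applying the same
    substitution to the hypothesis. Prompt and output format are assumptions."""
    prompt = (
        f"Premise: {premise}\nHypothesis: {hypothesis}\n"
        f"Produce {n} versions of this pair, replacing the arguments with real "
        f"entities so that each new premise is a true, attested statement. "
        f"Apply the same replacement to the hypothesis. "
        f"Format each version as: premise ||| hypothesis"
    )
    pairs = []
    for line in llm(prompt).splitlines():
        if "|||" in line:
            p, h = line.split("|||", 1)
            pairs.append((p.strip(), h.strip()))
    return pairs[:n]

def ask_entailment(llm: Callable[[str], str], premise: str, hypothesis: str) -> bool:
    """Binary entailment query; treats any answer starting with 'yes' as entailment."""
    prompt = (
        f"Premise: {premise}\nHypothesis: {hypothesis}\n"
        f"Does the premise entail the hypothesis? Answer yes or no."
    )
    return llm(prompt).strip().lower().startswith("yes")

def explicit_inductive_inference(llm: Callable[[str], str], premise: str,
                                 hypothesis: str) -> bool:
    """Aggregate entailment answers over the original pair and its attested
    alternatives; a simple majority vote is assumed as the aggregation rule."""
    pairs = [(premise, hypothesis)] + generate_attested_pairs(llm, premise, hypothesis)
    votes = [ask_entailment(llm, p, h) for p, h in pairs]
    return sum(votes) > len(votes) / 2
```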
Paper Type: Short
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: textual entailment, natural language inference
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2898