Explicitly Stating Assumptions Reduces Hallucinations in Natural Language Inference

Published: 19 Mar 2024, Last Modified: 31 May 2024 · Tiny Papers @ ICLR 2024 · CC BY 4.0
Keywords: natural language inference, predicate logic, assumptions
TL;DR: The study proposes using predicate logic and natural deduction to reinforce premises in NLI models, addressing hallucinations caused by overstated inferential links in the training data.
Abstract: A natural language inference (NLI) model may hold the hallucination that a 'premise' infers a 'hypothesis'. This hallucination may stem not from insufficient training but from the data themselves: a training label may assert that 'a premise infers a hypothesis' when this inferential link is in fact overstated, owing to a mismatch in intensity or context. We propose that reinforcing the premise can mitigate this problem. To that end, we introduce a method that employs predicate logic and natural deduction to explicitly articulate the assumptions that validate the inferential link. This method makes the inference process transparent, drawing logically plausible conclusions from explicit premises. The NLI model may thus achieve higher reliability in understanding and processing natural language.
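To illustrate the core idea, here is a minimal, hypothetical sketch of how an unstated assumption can be made explicit and the hypothesis then derived by a single natural-deduction step (modus ponens). The predicates, the example sentences, and the forward-chaining helper are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative example (not from the paper):
#   Premise:    Bought(john, car)   "John bought a car."
#   Hypothesis: Owns(john, car)     "John owns a car."
# An NLI label may claim premise => hypothesis, but the link only holds
# under an unstated assumption: buying something implies owning it.
# Stating that assumption explicitly lets the conclusion follow logically.

def modus_ponens(facts, rules):
    """Forward-chain rules of the form (antecedent, consequent) over facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

premise = {("Bought", "john", "car")}
# The explicit assumption, instantiated for this premise/hypothesis pair:
assumption = [(("Bought", "john", "car"), ("Owns", "john", "car"))]

conclusions = modus_ponens(premise, assumption)
print(("Owns", "john", "car") in conclusions)  # True: hypothesis is derivable
```

With the assumption stated, the inference is a transparent deductive step rather than an overstated label; without it, the hypothesis would not be derivable from the premise alone.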
Supplementary Material: zip
Submission Number: 83