Abstract: LLMs are often claimed to be capable of Natural Language Inference (NLI), which is widely regarded as a cornerstone of more complex forms of reasoning. However, recent work shows that LLMs still suffer from hallucinations in NLI due to \textit{attestation bias}, whereby they over-rely on propositional memory as a shortcut rather than reasoning from the stated premise. To address this issue, we design an unsupervised framework that constructs counterfactual reasoning data and fine-tunes LLMs on it to reduce attestation bias. To measure bias reduction, we build \textit{bias-adversarial} variants of NLI datasets in which premise predicates are randomly replaced while hypotheses are kept unchanged. Extensive evaluations show that our framework significantly reduces hallucinations arising from attestation bias. We further evaluate LLMs fine-tuned with our framework on the original NLI datasets and on their bias-neutralized versions, in which original entities are replaced with randomly sampled ones. The results show that our framework consistently improves inferential performance on both the original and bias-neutralized NLI datasets.
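To make the dataset construction in the abstract concrete, below is a minimal illustrative sketch (not the authors' released code) of the two perturbations described: a bias-adversarial example that randomly replaces the premise predicate while keeping the hypothesis fixed, and a bias-neutralized example that swaps original entities for randomly sampled ones. The predicate inventory, entity map, and function names are assumptions for illustration only.

```python
import random

# Toy predicate inventory; illustrative assumption, not a resource from the paper.
PREDICATES = ["founded", "criticized", "acquired", "visited"]

def make_bias_adversarial(premise_pred: str, subj: str, obj: str, hypothesis: str):
    """Replace the premise predicate with a randomly sampled different one,
    leaving the hypothesis unchanged, so a model that relies on attested
    facts rather than the stated premise is exposed."""
    new_pred = random.choice([p for p in PREDICATES if p != premise_pred])
    adversarial_premise = f"{subj} {new_pred} {obj}"
    # The gold label must be reassessed against the new premise; the original
    # entailment no longer necessarily holds.
    return {"premise": adversarial_premise, "hypothesis": hypothesis}

def make_bias_neutralized(premise: str, hypothesis: str, entity_map: dict):
    """Replace original entities with randomly sampled ones in both sentences,
    removing any advantage from memorized facts about the original entities."""
    for old, new in entity_map.items():
        premise = premise.replace(old, new)
        hypothesis = hypothesis.replace(old, new)
    return {"premise": premise, "hypothesis": hypothesis}

if __name__ == "__main__":
    print(make_bias_adversarial("founded", "Ada Lovelace", "the company",
                                "Ada Lovelace started the company"))
    print(make_bias_neutralized("Google acquired YouTube",
                                "YouTube was bought by Google",
                                {"Google": "Acme Corp", "YouTube": "StreamCo"}))
```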
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: attestation bias, LLM, LM, language model, hallucination, natural language inference, NLI, entailment, directional, relative frequency, predicate
Contribution Types: NLP engineering experiment, Reproduction study, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 4537