LLMs are Frequency Pattern Learners in Natural Language Inference

ACL ARR 2025 May Submission 3749 Authors

19 May 2025 (modified: 03 Jul 2025) · CC BY 4.0
Abstract: While fine-tuning LLMs on NLI corpora improves their inferential performance, the mechanisms driving this improvement remain largely opaque. In this work, we conduct a series of experiments to investigate what LLMs actually learn during fine-tuning. We begin by analyzing predicate frequencies in premises and hypotheses across NLI datasets and identify a consistent frequency bias: in positive instances, predicates in hypotheses occur more frequently than those in premises. To assess the impact of this bias, we evaluate both standard and NLI-fine-tuned LLMs on bias-consistent and bias-adversarial cases. We find that LLMs exploit the frequency bias for inference and perform poorly on adversarial instances. Furthermore, fine-tuned LLMs exhibit significantly increased reliance on this bias, suggesting that they learn these frequency patterns from the datasets. Finally, we compute the frequencies of hyponyms and their corresponding hypernyms from WordNet, revealing a correlation between frequency bias and textual entailment. These findings help explain why learning frequency patterns can enhance model performance on inference tasks.
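
The sketch below is not the authors' code; it only illustrates the kind of WordNet-based frequency check the abstract describes, under two assumptions of ours: NLTK's Brown corpus as the frequency source and a handful of example verbs. For each hyponym it looks up one WordNet hypernym (the entailed, hypothesis-side predicate) and reports whether that hypernym is the more frequent word, i.e. whether the entailment direction lines up with the reported frequency bias.

# A minimal sketch, not the paper's implementation. The Brown corpus and the
# example verbs are illustrative assumptions on our part.
from collections import Counter
from typing import Optional

import nltk
from nltk.corpus import brown, wordnet as wn

nltk.download("brown", quiet=True)
nltk.download("wordnet", quiet=True)

# Unigram counts stand in for whatever frequency source the paper actually uses.
freq = Counter(w.lower() for w in brown.words())


def first_hypernym(word: str) -> Optional[str]:
    """Return one hypernym lemma of `word`'s first verb synset, if any."""
    synsets = wn.synsets(word, pos=wn.VERB)
    if not synsets or not synsets[0].hypernyms():
        return None
    # Multiword lemmas keep their underscores and will simply score 0 here.
    return synsets[0].hypernyms()[0].lemma_names()[0]


# A hyponym entails its hypernym (e.g. "sprint" entails "run"), so a positive
# NLI pair puts the hyponym in the premise and the hypernym in the hypothesis.
for hypo in ["sprint", "whisper", "devour", "stroll"]:
    hyper = first_hypernym(hypo)
    if hyper is None:
        continue
    consistent = freq[hyper] > freq[hypo]
    print(f"{hypo} -> {hyper}: bias-consistent = {consistent}")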
Paper Type: Short
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: LLMs, natural language inference, natural language understanding, textual entailment, sentiment analysis, data influence
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Data analysis
Languages Studied: English
Submission Number: 3749