DiffSampling: Enhancing Diversity and Accuracy in Neural Text Generation

ACL ARR 2025 May Submission 3929 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Despite their growing capabilities, language models still frequently reproduce content from their training data, generate repetitive text, and favor common grammatical patterns and vocabulary. A possible cause is the decoding strategy: the most common strategies either consider only the most probable tokens, which reduces output diversity, or increase the likelihood of unlikely tokens, compromising output accuracy and correctness. In this paper, we propose three new decoding methods that leverage a mathematical analysis of the token probability distribution to ensure the generation of contextually appropriate text. In particular, the difference between consecutive probabilities in the sorted distribution can be used to truncate likely-incorrect tokens. Experiments on math problem solving, extreme summarization, and the divergent association task demonstrate that our approach consistently performs at least as well as existing methods in terms of both quality and diversity.
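The truncation criterion described in the abstract lends itself to a short illustration. The sketch below is a minimal, hypothetical implementation of the core idea only: sort the token probabilities in descending order, locate the largest drop between consecutive values, and sample from the tokens before that drop. The function name `diff_truncation_sample`, the choice of cutting at the single steepest drop, and the renormalization step are assumptions made for illustration; the paper proposes three variants whose exact definitions are not reproduced here.

```python
import torch

def diff_truncation_sample(logits: torch.Tensor) -> torch.Tensor:
    """Sample one token id from a 1-D logits tensor, truncating the
    distribution at the largest drop between consecutive sorted
    probabilities (illustrative sketch, not the paper's exact method)."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    # Discrete derivative of the sorted distribution:
    # diffs[i] = p_{i+1} - p_i, which is always <= 0.
    diffs = sorted_probs[1:] - sorted_probs[:-1]
    # The most negative difference marks the steepest drop;
    # keep every token up to and including that point.
    cutoff = int(torch.argmin(diffs).item()) + 1
    kept = sorted_probs[:cutoff]
    kept = kept / kept.sum()  # renormalize the retained probability mass
    choice = torch.multinomial(kept, num_samples=1)
    return sorted_idx[choice]
```

For a 1-D tensor of vocabulary logits, `diff_truncation_sample(logits)` returns the id of one sampled token; batching and edge cases (e.g., a vocabulary of size one) are omitted for clarity.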
Paper Type: Long
Research Area: Generation
Research Area Keywords: inference methods, analysis
Contribution Types: NLP engineering experiment, Theory
Languages Studied: English
Submission Number: 3929