Jointly Reinforcing Diversity and Quality in Language Model Generations

Published: 03 Mar 2026, Last Modified: 03 Mar 2026, CC BY 4.0
Keywords: Large Language Models, Post-training, Reinforcement Learning, Diversity, Creativity
TL;DR: We propose a framework for measuring semantic diversity and incorporate a diversity reward into RL for LLMs
Abstract: Post-training of Language Models (LMs) often prioritizes accuracy and helpfulness at the expense of diversity. This creates a tension: while post-training improves response quality, it also sharpens output distributions and narrows the range of ideas, limiting the usefulness of LMs in creative and exploratory tasks such as brainstorming, storytelling, or problem solving. We address this challenge with Diversity-Aware Reinforcement Learning (Darling), a framework that jointly optimizes for response quality and semantic diversity. At its core, Darling introduces a learned partition function to measure diversity beyond surface-level lexical variation. This diversity signal is then combined with a quality reward during online reinforcement learning, encouraging models to generate outputs that are both high-quality and distinct. Experiments across multiple model families and sizes show that Darling generalizes to two regimes: non-verifiable tasks (instruction following and creative writing) and verifiable tasks (competition math). In the first setting, Darling consistently outperforms quality-only RL baselines on five benchmarks, producing outputs that are simultaneously higher in quality and novelty. In the second setting, it achieves both higher pass@1 (quality) and pass@k (diversity). Most strikingly, explicitly optimizing for diversity catalyzes exploration in online RL, which manifests as higher-quality responses.
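Since the abstract describes the mechanism only at a high level, the sketch below illustrates one way a partition-based diversity signal could be combined with a quality reward over a batch of sampled responses. The names `darling_rewards`, `quality_fn`, and `partition_fn` are hypothetical stand-ins (the paper's learned partition function is not specified here), and the multiplicative quality-diversity combination is an illustrative assumption, not the paper's confirmed formulation.

```python
from collections import Counter
from typing import Callable, List


def darling_rewards(
    responses: List[str],
    quality_fn: Callable[[str], float],
    partition_fn: Callable[[str], int],
) -> List[float]:
    """Score a batch of sampled responses with a combined reward.

    partition_fn stands in for a learned partition function: it maps
    each response to a semantic equivalence class. A response whose
    class is rare within the batch earns a larger diversity weight
    (1 / class size), and the final reward multiplies quality by
    diversity so that only responses which are both good and distinct
    score highly. The multiplicative combination is an assumption
    made for illustration.
    """
    classes = [partition_fn(r) for r in responses]
    counts = Counter(classes)
    return [
        quality_fn(r) * (1.0 / counts[c])  # unique class -> full quality reward
        for r, c in zip(responses, classes)
    ]


if __name__ == "__main__":
    # Toy stand-ins: quality favors longer answers; the "partition"
    # groups responses by their first word (a crude semantic proxy).
    batch = ["A cat sat quietly.", "A cat slept.", "Dogs ran outside."]
    rewards = darling_rewards(
        batch,
        quality_fn=lambda r: float(len(r.split())),
        partition_fn=lambda r: hash(r.split()[0]),
    )
    print(rewards)  # duplicates in the "A ..." class are down-weighted
```

A multiplicative combination (rather than a weighted sum) means a response that duplicates an already-sampled idea cannot compensate with quality alone, which matches the abstract's stated goal of outputs that are both high-quality and distinct.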
Submission Number: 14