Data Diversification Methods In Alignment Enhance Math Performance In LLMs

ACL ARR 2025 May Submission 4814 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: While recent advances in preference learning have improved alignment with human feedback, mathematical reasoning remains a persistent challenge. We investigate how data diversification strategies in preference optimization can improve the mathematical reasoning abilities of large language models (LLMs). We evaluate three common data generation methods, namely temperature sampling, Chain-of-Thought prompting, and Monte Carlo Tree Search (MCTS), and introduce Diversified-ThinkSolve (DTS), a novel structured approach that systematically decomposes problems into diverse reasoning paths. Our results show that strategically diversified preference data substantially improves mathematical reasoning performance, with the best approach yielding gains of 7.1% on GSM8K and 4.2% on MATH over the base model. Despite its strong performance, DTS incurs only marginal computational overhead (1.03×) relative to the baseline, whereas MCTS is nearly five times more costly while yielding lower returns. These findings demonstrate that structured exploration of diverse problem-solving methods creates more effective preference data for mathematical alignment than traditional approaches.
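The DTS pipeline itself is not shown on this page; as a rough intuition only, the sketch below illustrates the general technique the abstract describes: decomposing a problem into diverse reasoning approaches, sampling a candidate solution per approach, and pairing correct (chosen) with incorrect (rejected) candidates to form DPO-style preference data. All names here (`APPROACHES`, `generate_solution`, `extract_answer`, `build_preference_pairs`) are hypothetical stand-ins, not the authors' implementation; the model call is stubbed so the example runs.

```python
# Minimal illustrative sketch (not the authors' DTS code) of turning diverse
# reasoning paths into preference pairs for DPO-style training.
import random

APPROACHES = [  # hypothetical "diverse reasoning path" prompts
    "Solve step by step with algebra.",
    "Solve by working backwards from the result.",
    "Solve by estimating, then refining.",
]

def generate_solution(problem: str, approach: str, temperature: float = 0.7) -> str:
    """Stand-in for an LLM call that returns one candidate solution string."""
    # In practice this would query a model with the approach-specific prompt;
    # here we fabricate an answer so the sketch is runnable end to end.
    return f"[{approach}] answer={random.choice(['42', '41'])}"

def extract_answer(solution: str) -> str:
    """Pull the final answer out of a candidate solution string."""
    return solution.rsplit("answer=", 1)[-1]

def build_preference_pairs(problem: str, gold_answer: str):
    """Generate one candidate per approach, then pair correct solutions
    (chosen) with incorrect ones (rejected)."""
    candidates = [generate_solution(problem, a) for a in APPROACHES]
    correct = [c for c in candidates if extract_answer(c) == gold_answer]
    incorrect = [c for c in candidates if extract_answer(c) != gold_answer]
    # Each (chosen, rejected) tuple is one preference-learning example.
    return [(c, r) for c in correct for r in incorrect]

if __name__ == "__main__":
    for chosen, rejected in build_preference_pairs("What is 6 * 7?", "42"):
        print("chosen:", chosen, "| rejected:", rejected)
```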
Paper Type: Long
Research Area: Generation
Research Area Keywords: Data Generation, Preference Learning, DPO, RLHF
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4814