Selective Fine-tuning via Excess Loss for Enhanced Reasoning in Large Language Models

ACL ARR 2025 May Submission6668 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: While supervised fine-tuning on chain-of-thought (CoT) traces can markedly boost the reasoning capabilities of large language models (LLMs), not all tokens in a CoT trace contribute equally to that gain. We propose a selective fine-tuning framework that embeds the token-selection ideas of Selective Language Modeling (SLM) into reasoning-oriented training. Specifically, by measuring each token's excess loss against a reference model, we pinpoint the fragments most critical to reasoning and apply one of three tailored objectives (token-selective, token-weighted, or segment-selective) so that gradient updates focus only on those high-value tokens or spans. Applied to Qwen2.5-1.5B and evaluated on GSM8K and MATH, this strategy outperforms standard fine-tuning, with the token-selective variant raising accuracy by up to 5.6 percentage points. The approach not only improves model performance and training efficiency but also strengthens the coherence and reliability of multi-step reasoning, offering a scalable path toward more capable reasoning models.
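The token-selective variant can be illustrated with a minimal sketch, assuming the SLM-style definition of excess loss as the training model's per-token cross-entropy minus the reference model's; the function name `token_selective_loss`, the `keep_ratio` parameter, and the top-k thresholding are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def token_selective_loss(model, ref_model, input_ids, attention_mask, keep_ratio=0.6):
    """Token-selective objective (sketch): back-propagate loss only on tokens
    whose excess loss (training-model loss minus reference-model loss) is
    among the highest. `keep_ratio` is a hypothetical hyperparameter."""
    labels = input_ids[:, 1:]  # next-token targets
    logits = model(input_ids, attention_mask=attention_mask).logits[:, :-1]
    with torch.no_grad():  # the reference model stays frozen
        ref_logits = ref_model(input_ids, attention_mask=attention_mask).logits[:, :-1]

    # Per-token cross-entropy for both models, without reduction.
    ce = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")

    # High excess loss = hard for the training model, easy for the reference,
    # i.e. a token likely to be worth learning from.
    excess = ce - ref_ce
    mask = attention_mask[:, 1:].bool()
    k = max(1, int(keep_ratio * mask.sum()))
    threshold = excess[mask].topk(k).values.min()  # keep top-k excess-loss tokens
    selected = mask & (excess >= threshold)

    # Average the training loss over the selected tokens only.
    return (ce * selected.float()).sum() / selected.sum().clamp(min=1)
```

Under this reading, the token-weighted variant would replace the hard mask with per-token weights derived from the excess loss, and the segment-selective variant would pool excess loss over contiguous spans before selection.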
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Language Modeling, Machine Learning for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 6668