Keywords: data rewriting, supervised fine-tuning, catastrophic forgetting, reinforcement learning
Abstract: Large language models (LLMs) have made rapid progress, yet adapting them to downstream scenarios still commonly relies on supervised fine-tuning (SFT). When downstream data exhibit a substantial distribution shift from the model’s prior training distribution, SFT can induce catastrophic forgetting.
To narrow this distributional gap, data rewriting has been proposed as a data-centric approach that rewrites downstream training data before SFT.
However, existing methods typically sample rewrites from a prompt-induced conditional distribution, so the resulting targets are not necessarily aligned with the model’s natural QA-style generation distribution. Moreover, reliance on fixed templates can lead to diversity collapse.
To address these issues, we cast data rewriting as a policy learning problem and learn a rewriting policy that better matches the backbone’s QA-style generation distribution while preserving diversity.
Since distributional alignment, diversity, and task consistency can be evaluated automatically but are difficult to optimize end-to-end with differentiable objectives, we leverage reinforcement learning to optimize the rewrite distribution under reward feedback and propose an RL-based data-rewriting agent. The agent jointly optimizes QA-style distributional alignment and diversity under a hard task-consistency gate, thereby constructing a higher-quality rewritten dataset for downstream SFT. Extensive experiments show that our method achieves downstream gains comparable to standard SFT while reducing forgetting on non-downstream benchmarks by 12.34\% on average.
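As a minimal illustrative sketch of the gated reward described above (not the paper's actual implementation), the hard task-consistency gate can be read as zeroing out the reward for any rewrite that fails an automatic consistency check, and otherwise combining an alignment term with a diversity term. All scorer callables, the weight `alpha`, and their semantics below are assumptions for illustration only.

```python
# Hypothetical sketch of a gated rewrite reward; scorers, weights, and names
# are illustrative assumptions, not the paper's specification.
from dataclasses import dataclass
from typing import Callable


@dataclass
class GatedRewriteReward:
    alignment_score: Callable[[str], float]          # assumed: alignment with the backbone's QA-style distribution
    diversity_score: Callable[[str], float]          # assumed: dissimilarity to previously produced rewrites
    is_task_consistent: Callable[[str, str], bool]   # assumed: automatic task-consistency check
    alpha: float = 0.5                               # illustrative weight between the two terms

    def __call__(self, original: str, rewrite: str) -> float:
        # Hard gate: task-inconsistent rewrites receive zero reward,
        # regardless of how well they score on alignment or diversity.
        if not self.is_task_consistent(original, rewrite):
            return 0.0
        # Otherwise, jointly reward distributional alignment and diversity.
        return (self.alpha * self.alignment_score(rewrite)
                + (1.0 - self.alpha) * self.diversity_score(rewrite))
```

In an RL setup of this kind, such a scalar reward would be fed back to the rewriting policy (e.g., via a policy-gradient method) to shift the rewrite distribution toward consistent, well-aligned, and diverse targets.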
Paper Type: Long
Research Area: Language Models
Research Area Keywords: fine-tuning, continual learning, prompting, safety and alignment, robustness, transfer
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 7779