Abstract: Recent advances in Large Language Models (LLMs) have enabled strong performance in long-form writing, yet existing supervised fine-tuning (SFT) approaches suffer from limitations such as data saturation and a learning capacity bounded by teacher signals.
In this work, we present an Adaptive Curriculum Reinforcement Learning (ACRL) framework to advance long-form writing capabilities beyond SFT.
The framework consists of three key components: a Margin-aware Data Selection strategy that prioritizes samples with high learning potential, a Pairwise Comparison Reward mechanism that enhances reward discriminability, and a Dynamic Reference Scheduling approach, which plays a particularly critical role by adaptively adjusting task difficulty based on evolving model performance.
Experiments on 7B-scale writer models show that our RL framework substantially improves long-form writing performance over strong SFT baselines.
Furthermore, we observe that models trained with long-output RL generalize surprisingly well to long-input reasoning tasks, potentially offering a promising perspective for rethinking long-context training.
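To illustrate the scheduling idea described in the abstract, the following is a minimal sketch, assuming the reference models are ordered into a difficulty ladder and the active reference is promoted or demoted according to the policy's recent win rate; the function name, thresholds, and promotion rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of dynamic reference scheduling: the policy is compared
# against a ladder of reference models ordered by difficulty, and the active
# reference is promoted or demoted based on the policy's recent win rate.
# Names and threshold values are illustrative assumptions, not the paper's.

def schedule_reference(win_rate: float, current_level: int, num_levels: int,
                       promote_at: float = 0.7, demote_at: float = 0.3) -> int:
    """Return the index of the reference model to compare against next."""
    if win_rate >= promote_at and current_level < num_levels - 1:
        return current_level + 1   # policy reliably beats this reference: harder task
    if win_rate <= demote_at and current_level > 0:
        return current_level - 1   # policy struggles: fall back to an easier reference
    return current_level           # otherwise keep the current difficulty

# Example: a policy winning 75% of pairwise comparisons at level 1 of 4 moves to level 2.
print(schedule_reference(win_rate=0.75, current_level=1, num_levels=4))  # -> 2
```

In such a setup, the win rate could plausibly be read off the same pairwise judgments that drive the comparison-based reward, though the abstract does not specify how the two components are coupled.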
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: applications, fine-tuning
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 7564