Logits Replay + MoClip: Stabilized, Low-Cost Post-Training with Minimal Forgetting

ICLR 2026 Conference Submission 14988 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Logits Replay, Domain Adaptation, Dynamic Logits Collection, Restricted Softmax, Fine-tuning Efficiency, Catastrophic Forgetting Mitigation, Stability-aware Optimization, Gradient–Momentum Angle Clipping, Atan2-based Scaling
TL;DR: We propose Logits Replay + MoClip, combining dynamic top-K supervision with a stability-aware optimizer. It boosts domain accuracy, preserves general skills, and cuts training cost by 40%+.
Abstract: Large language models (LLMs) often face a trade-off in post-training: improvements on specialized domains frequently come at the expense of general capabilities. Existing solutions attempt to mitigate this tension via regularization, selective parameter updates, or data-centric replay, but each imposes significant costs in computation, data access, or adaptability. Recent work has shown that training signals can be compressed to subsets of logits without severe accuracy loss, suggesting a path toward efficient adaptation. However, naïve truncation destabilizes optimization and exacerbates forgetting. We introduce Logits Replay + MoClip, a two-stage framework that compresses supervision in the logit space and stabilizes optimization at the update level. In Stage 0, we record dynamic Top-$K$ token subsets that cover a probability threshold, always including the gold label. In Stage 1, we replay these compact subsets to compute exact renormalized losses, avoiding the full-vocabulary softmax and providing implicit regularization. To ensure stability, we design MoClip, an optimizer that caps gradient–momentum rotation and applies an $\arctan2$-based rescaling of updates. Empirically, our method improves domain performance on Communication Technology (CT) and NL2SQL tasks while mitigating forgetting on general benchmarks (MMLU, BBH, GPQA, MATH), and reduces training cost by over 40\%. Together, these contributions offer a scalable, architecture-agnostic path for domain adaptation of LLMs without sacrificing generalization.
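As a rough illustration of the Stage 1 objective (the notation below is ours, sketching one plausible reading of the abstract rather than the authors' exact formulation): let $S_t \subseteq V$ be the Top-$K$ subset recorded in Stage 0 for position $t$, i.e. the smallest set of tokens whose cumulative probability under the recording model reaches the threshold, always augmented with the gold token $y_t$. The replayed loss then renormalizes the softmax over $S_t$ alone:

$$\mathcal{L}_{\text{replay}} = -\frac{1}{T}\sum_{t=1}^{T} \log \frac{\exp\!\big(z_{t,\,y_t}\big)}{\sum_{v \in S_t} \exp\!\big(z_{t,\,v}\big)},$$

where $z_{t,v}$ is the current model's logit for token $v$; the normalizer runs over $|S_t| \ll |V|$ entries, which is where both the cost savings and the implicit regularization come from.

For MoClip, one instantiation consistent with the keywords (again our assumption, not a formula taken from the paper) limits the angle between the incoming gradient $g_t$ and the previous momentum $m_{t-1}$ to a cap $\theta_{\max}$, e.g. by interpolating $g_t$ toward $m_{t-1}$ to obtain $\tilde g_t$ with $\angle(\tilde g_t, m_{t-1}) \le \theta_{\max}$, and then replaces Adam's $\hat m_t/(\sqrt{\hat v_t}+\epsilon)$ step with a bounded, elementwise $\arctan2$ rescaling:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,\tilde g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,\tilde g_t^{\,2}, \qquad w_{t+1} = w_t - \eta\,\arctan2\!\big(\hat m_t,\ \sqrt{\hat v_t}\big),$$

where $\hat m_t$ and $\hat v_t$ are the usual bias-corrected moments and $\arctan2$ acts elementwise, so each coordinate's step is bounded by $\eta\pi/2$ without an $\epsilon$ floor.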
Primary Area: optimization
Submission Number: 14988