Keywords: Large Language Models, Model Compaction, Post-Training
Abstract: Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, but their substantial size often demands significant computational resources. To reduce resource consumption and accelerate inference, it is essential to eliminate redundant parameters without compromising performance. However, conventional pruning methods that directly remove such parameters often lead to a dramatic drop in performance on reasoning tasks and require extensive post-training to recover the lost capabilities. In this work, we propose a gradual compaction method that divides the compression process into multiple fine-grained iterations, applying a $\underline{P}$rune–$\underline{T}$une $\underline{L}$oop ($\texttt{PTL}$) at each stage to incrementally reduce model size while restoring performance through fine-tuning. This iterative approach, reminiscent of the "boiling frog" effect, enables the model to be compressed progressively without abrupt performance loss. Experimental results show that $\texttt{PTL}$ can compress LLMs to nearly half their original size with only lightweight post-training, while maintaining performance comparable to the original model on reasoning tasks. Moreover, $\texttt{PTL}$ is flexible: it can be combined with various pruning strategies, such as neuron pruning and layer pruning, as well as different post-training methods, including continual pre-training and reinforcement learning. Additionally, experiments confirm the effectiveness of $\texttt{PTL}$ on a variety of tasks beyond mathematical reasoning, such as code generation, demonstrating its broad applicability.
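To make the iterative scheme concrete, below is a minimal sketch in Python of a prune-tune loop as described in the abstract. All names here (`prune_tune_loop`, `prune_step`, `finetune`, `keep_per_iter`) are hypothetical placeholders, not the paper's actual API; the paper's pruning criteria and post-training procedures may differ.

```python
def prune_tune_loop(model, target_ratio: float, num_iterations: int,
                    prune_step, finetune):
    """Hypothetical sketch: gradually compress `model` to `target_ratio`
    of its original size over `num_iterations` fine-grained stages,
    each stage pruning a little and then fine-tuning to recover.

    `prune_step(model, keep_ratio)` and `finetune(model)` are assumed
    callables standing in for whatever pruning strategy (e.g. neuron or
    layer pruning) and post-training method (e.g. continual pre-training
    or reinforcement learning) is plugged in.
    """
    # Choose a per-iteration keep ratio so the stages compound to the
    # overall target: keep_per_iter ** num_iterations == target_ratio.
    keep_per_iter = target_ratio ** (1.0 / num_iterations)
    for _ in range(num_iterations):
        model = prune_step(model, keep_ratio=keep_per_iter)  # small pruning step
        model = finetune(model)  # lightweight recovery before pruning again
    return model
```

For example, reaching roughly half the original size (`target_ratio = 0.5`) in 8 iterations would prune only about 8% of the remaining parameters per stage, which is the "boiling frog" intuition: each individual step is small enough that fine-tuning can restore performance before the next cut.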
Primary Area: foundation or frontier models, including LLMs
Submission Number: 20418