Keywords: pruning, large language models, security, poisoning
TL;DR: We show that popular LLM pruning methods can be exploited such that the pruned model behaves maliciously, while the unpruned version appears to function normally.
Abstract: Model pruning, i.e., removing a subset of model weights, has become a prominent approach to reducing the memory footprint of large language models (LLMs) during deployment. Through popular inference engines, such as vLLM, users can conveniently prune downloaded models before deploying them. While the utility and efficiency of pruning methods have improved significantly, the security implications of LLM pruning remain underexplored. In this work, for the first time, we show that modern LLM pruning methods can be maliciously exploited.
In particular, an adversary can construct a model that appears benign yet, once pruned, exhibits malicious behaviors. Our method is based on the idea that the adversary can compute a proxy metric estimating how likely each parameter is to be pruned. With this information, the adversary first injects a malicious behavior into parameters that are unlikely to be pruned, and then repairs the model using parameters that are likely to be pruned, effectively canceling out the injected behavior in the unpruned model. We demonstrate the severity of our attack through extensive evaluation on five models: after any of the pruning methods available in vLLM (Magnitude, Wanda, and SparseGPT) is applied, the pruned model consistently exhibits strong malicious behaviors across a diverse set of attack scenarios (success rates of up to 95.7% for jailbreaking, 98.7% for benign instruction refusal, and 99.5% for targeted content injection). Our results reveal a critical deployment-time security gap and underscore the urgent need for stronger security awareness in model compression.
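The proxy-metric idea is easiest to see for magnitude pruning, where the prune mask is fully predictable from the dense weights alone. A minimal sketch of that predictability and of where the two weight perturbations would reside (the perturbation values below are hypothetical placeholders for illustration, not the paper's actual injection or repair procedure):

```python
import numpy as np

def magnitude_prune_mask(W, sparsity):
    """Boolean mask that is True where a weight survives magnitude pruning.

    Magnitude pruning (one of the vLLM-supported methods named in the paper)
    removes the weights with the smallest absolute values, so an adversary
    holding the dense weights can compute the mask exactly in advance.
    """
    k = int(round(W.size * sparsity))          # number of weights to remove
    order = np.argsort(np.abs(W), axis=None)   # flat indices, smallest |w| first
    mask = np.ones(W.size, dtype=bool)
    mask[order[:k]] = False                    # smallest-magnitude weights are cut
    return mask.reshape(W.shape)

# Schematic of the attack's weight partition on a toy layer:
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
survives = magnitude_prune_mask(W, sparsity=0.5)

delta_malicious = 0.05 * survives    # placed only in weights that survive pruning
delta_repair = -0.05 * (~survives)   # placed only in weights that pruning removes

W_dense = W + delta_malicious + delta_repair   # shipped model (repair masks the injection)
W_pruned = W_dense * survives                  # after pruning, only the injection remains
```

The point of the sketch is the asymmetry: `delta_repair` influences the dense model a user would inspect, but is eliminated by the very pruning step the user applies, leaving `delta_malicious` intact.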
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19753