Mind the Gap: A Practical Attack on GGUF Quantization

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
TL;DR: Building on existing LLM quantization exploitation attacks that target naive quantization, we extend them to the popular GGUF quantization with a simple modification.
Abstract: With the increasing size of frontier LLMs, post-training quantization has become the standard for memory-efficient deployment. Recent work has shown that basic rounding-based quantization schemes pose security risks, as they can be exploited to inject malicious behaviors into quantized models that remain hidden in full precision. However, existing attacks cannot be applied to more complex quantization methods, such as the GGUF family used in the popular ollama and llama.cpp frameworks. In this work, we address this gap by introducing the first attack on GGUF. Our key insight is that the quantization error -- the difference between the full-precision weights and their (de-)quantized version -- provides sufficient flexibility to construct malicious quantized models that appear benign in full precision. Leveraging this, we develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. We demonstrate the effectiveness of our attack on three popular LLMs across nine GGUF quantization data types and three diverse attack scenarios: insecure code generation ($\Delta$=$88.7\%$), targeted content injection ($\Delta$=$85.0\%$), and benign instruction refusal ($\Delta$=$30.1\%$). Our attack highlights that (1) the most widely used post-training quantization method is susceptible to adversarial interference, and (2) the complexity of quantization schemes alone is insufficient as a defense.
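
To make the mechanism concrete, below is a minimal sketch of the kind of interval-constrained training the abstract describes: the model is optimized on some training objective while each weight is repeatedly projected back into an interval derived from its quantization error, so that the (de-)quantized weights stay approximately fixed. The function names, the `slack` parameter, and the interval construction are illustrative assumptions, not the authors' exact GGUF-aware procedure, which must account for GGUF's block-wise quantization formats.

```python
# Illustrative sketch only (assumption): a generic projected-gradient loop that
# constrains weights by their quantization error, mirroring the high-level idea
# in the abstract rather than the paper's actual GGUF-specific constraints.
import torch


@torch.no_grad()
def build_constraint_intervals(weights, dequantized, slack=1.0):
    # Per-weight box around the dequantized value, sized by the quantization
    # error |w - dequant(quant(w))|. A real attack would instead derive the
    # interval from the quantizer's rounding boundaries.
    err = (weights - dequantized).abs()
    return dequantized - slack * err, dequantized + slack * err


@torch.no_grad()
def project_into_box(param, lower, upper):
    # Clamp weights back into the region intended to keep the quantized
    # model (approximately) unchanged.
    param.clamp_(min=lower, max=upper)


def constrained_finetune(model, dequantized_weights, data_loader,
                         objective_fn, lr=1e-5, max_steps=1000):
    # `dequantized_weights` is a hypothetical dict mapping parameter names to
    # the dequantized copies of the model's weights; `objective_fn` is the
    # attacker's training objective.
    boxes = {
        name: build_constraint_intervals(p.data, dequantized_weights[name])
        for name, p in model.named_parameters()
    }
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for step, batch in enumerate(data_loader):
        if step >= max_steps:
            break
        loss = objective_fn(model, batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Projection step: keep every weight inside its constraint interval.
        for name, p in model.named_parameters():
            lower, upper = boxes[name]
            project_into_box(p.data, lower, upper)
    return model
```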
Lay Summary: Quantization is a key technique for running large language models (LLMs) more efficiently. It reduces memory usage without sacrificing performance. In general, a model is expected to behave similarly before and after quantization. However, an attacker can train a model so that it behaves safely in full precision and only triggers harmful behavior after it is quantized. This is risky because a user might try out a full-precision model, decide it is safe and useful, then quantize it to run on a smaller device, only to unknowingly activate a hidden attack. While similar attack ideas have been explored in past research, they have mostly focused on classical, simpler quantization methods. Here we show for the first time that this kind of attack also works on GGUF, a more accurate quantization format widely used in real-world deployments.
Primary Area: Social Aspects->Safety
Keywords: quantization, large language models, security, poisoning, gguf
Submission Number: 15795