Abstract: We introduce QuaRot, a new Quantization scheme based on Rotations, which
quantizes LLMs end-to-end, including all weights, activations, and
KV cache in 4 bits. QuaRot rotates LLMs in a way that removes outliers from
the hidden state without changing the output, making quantization easier. This
computational invariance is applied to the hidden state (residual) of the LLM, as
well as to the activations of the feed-forward components, to aspects of the attention
mechanism, and to the KV cache. The result is a quantized model in which all
matrix multiplications are performed in 4 bits, without any channels retained
in higher precision. Our quantized LLAMA2-70B model loses at most 0.29 points of
WikiText-2 perplexity and retains 99% of its zero-shot performance.
Code is available at: https://github.com/spcl/QuaRot.
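As a rough illustration of this computational invariance, the following PyTorch sketch (an illustrative example, not the QuaRot code) folds a random orthogonal matrix into a linear layer: the output is unchanged, while an outlier activation channel is spread across the hidden dimension.

```python
# Minimal sketch of computational invariance under rotation: for y = x @ W.T,
# replacing x with x @ Q and W with W @ Q (Q orthogonal) leaves y unchanged,
# while the rotation spreads outlier magnitudes across channels.
import torch

torch.manual_seed(0)
hidden = 8
x = torch.randn(4, hidden)
x[:, 2] *= 50.0                          # simulate an outlier activation channel
W = torch.randn(hidden, hidden)

# Random orthogonal rotation Q (Q @ Q.T = I), here taken from a QR decomposition.
Q, _ = torch.linalg.qr(torch.randn(hidden, hidden))

y_plain = x @ W.T                        # original forward pass
y_rotated = (x @ Q) @ (W @ Q).T          # rotation folded into activations and weights

print(torch.allclose(y_plain, y_rotated, atol=1e-3, rtol=1e-4))  # True (up to float error)
print(x.abs().max(dim=0).values)         # the outlier channel dominates
print((x @ Q).abs().max(dim=0).values)   # magnitudes are spread more evenly
```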