Keywords: Quantization, Efficient Deep Learning, Activation Quantization
TL;DR: STaMP applies sequence transforms with mixed-precision quantization to exploit token activation correlations, enabling accurate low bit width inference for LLMs and LVMs while complementing existing feature transforms and weight quantization methods.
Abstract: Quantization is the key method for reducing inference latency, power and memory footprint of generative AI models.
However, accuracy often degrades sharply when activations are quantized to low bit widths.
Recent work suggests that invertible linear transformations (e.g., rotations) can aid quantization by reparameterizing feature channels and weights.
In this paper, we propose Sequence Transformation and Mixed Precision (STaMP) quantization, a novel strategy that applies linear transformations along the sequence dimension to exploit the strong local correlation in language and visual data.
By keeping a small number of tokens in each intermediate activation at higher precision, we can maintain model accuracy at lower (average) activation bit widths.
We evaluate STaMP on recent LVM and LLM architectures, demonstrating that it significantly improves low bit width activation quantization and complements established activation and weight quantization methods, including recent feature transformations.
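To make the idea concrete, below is a minimal sketch of the general pattern the abstract describes: apply an invertible linear transform along the sequence (token) dimension, keep a few transformed tokens at higher precision while quantizing the rest at low precision, then invert the transform. This is not the paper's implementation; the function names (`stamp_like_quantize`, `uniform_quantize`), the choice of a random orthogonal transform, per-tensor uniform quantization, and the placement of the high-precision tokens are all illustrative assumptions.

```python
import numpy as np

def uniform_quantize(x, bits):
    # Symmetric per-tensor uniform quantization to the given bit width (illustrative).
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.round(x / scale).clip(-qmax - 1, qmax) * scale

def stamp_like_quantize(acts, T, hi_tokens=4, lo_bits=4, hi_bits=8):
    # acts: (seq_len, hidden_dim) activations; T: invertible (seq_len, seq_len) transform.
    # Mix information across tokens along the sequence dimension.
    mixed = T @ acts
    # Mixed precision over tokens: a few rows at higher precision, the rest at low precision.
    # (Which transformed tokens deserve higher precision is a design choice; here we
    # simply take the first hi_tokens rows for illustration.)
    out = np.empty_like(mixed)
    out[:hi_tokens] = uniform_quantize(mixed[:hi_tokens], hi_bits)
    out[hi_tokens:] = uniform_quantize(mixed[hi_tokens:], lo_bits)
    # Invert the sequence transform so downstream computation is approximately preserved.
    return np.linalg.inv(T) @ out

# Toy example: random orthogonal sequence transform on synthetic activations.
rng = np.random.default_rng(0)
seq_len, hidden = 16, 32
acts = rng.standard_normal((seq_len, hidden))
T, _ = np.linalg.qr(rng.standard_normal((seq_len, seq_len)))
approx = stamp_like_quantize(acts, T)
print("mean reconstruction error:", np.abs(approx - acts).mean())
```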
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 17615