QKV Projections Require a Fraction of Their Memory

Published: 26 Jan 2026 · Last Modified: 11 Apr 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Memory Efficient Training, Pre-training, Finetuning, Approximate Matrix Multiplication, Compressed Activations
TL;DR: Significantly reduces QKV projection memory by leveraging Point-Approximate Matrix Multiplication (PAMM).
Abstract: The Multi-Head Attention mechanism is central to LLM operation, and many works target its compute and memory efficiency during training. While most of these focus on approximating the scaled dot product, the memory consumed by the linear projections that compute the $Q$, $K$, and $V$ tensors from the input $x$ is often overlooked. To address this, we propose Point-Approximate Matrix Multiplication (PAMM), a novel tensor compression technique that compresses the activations of the $Q$, $K$, and $V$ projections in attention layers by a factor of up to $512\times$, effectively erasing their memory footprint, while achieving similar or better final perplexity. PAMM is fully composable with efficient attention techniques such as FlashAttention, making it a practical and complementary method for memory-efficient LLM training.
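The abstract's core idea is that the $Q$, $K$, $V$ linear projections normally save their full input activation $x$ for the backward pass, and that this saved tensor can be replaced by a heavily compressed surrogate. The sketch below illustrates where such activation compression plugs in, using a generic uniform row-sampling estimator of the weight gradient (a standard Monte Carlo approximate matrix multiplication); the `SampledActivationLinear` class, the `keep_ratio` parameter, and the sampling compressor are illustrative assumptions, not the PAMM algorithm from the paper.

```python
import torch


class SampledActivationLinear(torch.autograd.Function):
    """Linear projection (as used for Q/K/V) that saves only a uniformly
    sampled subset of the input rows for the backward pass instead of the
    full activation. Illustrative sketch only, not PAMM itself."""

    @staticmethod
    def forward(ctx, x, weight, keep_ratio=1.0 / 512):
        # x: (tokens, d_in), weight: (d_out, d_in)
        out = x @ weight.t()
        n = x.shape[0]
        k = max(1, int(n * keep_ratio))
        idx = torch.randperm(n, device=x.device)[:k]
        # Save only k sampled rows: stored activation shrinks from
        # n * d_in to k * d_in (e.g. 512x smaller for keep_ratio = 1/512).
        ctx.save_for_backward(x[idx], weight, idx)
        ctx.n = n
        return out

    @staticmethod
    def backward(ctx, grad_out):
        x_sub, weight, idx = ctx.saved_tensors
        # The input gradient needs only the weight, not the activation.
        grad_x = grad_out @ weight
        # The weight gradient dW = grad_out^T x is estimated from the
        # sampled rows, rescaled to stay unbiased under uniform sampling.
        scale = ctx.n / x_sub.shape[0]
        grad_w = scale * grad_out[idx].t() @ x_sub
        return grad_x, grad_w, None


if __name__ == "__main__":
    x = torch.randn(4096, 1024, requires_grad=True)    # token activations
    w_q = torch.randn(1024, 1024, requires_grad=True)   # Q projection weight
    q = SampledActivationLinear.apply(x, w_q, 1.0 / 512)
    q.sum().backward()                                   # uses the compressed activation
```

Because the input gradient is computed from the weight alone, only the weight gradient is approximated here, which is why compressing the saved activation does not disturb the rest of backpropagation.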
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 11327