Keywords: Sketching, Locality Sensitive Hashing, RACE, Attention, Linear, Transformers
TL;DR: We introduce an efficient attention mechanism based on a differentiable LSH sketch, designed for long-sequence training.
Abstract: Softmax Attention has quadratic time complexity in sequence length, which becomes prohibitive at long contexts, even with highly optimized GPU kernels. For example, FlashAttention-2/3 (exact, GPU-optimized implementations of Softmax Attention) cannot complete a single forward–backward pass of a single attention layer once the context exceeds $\sim 4$ million tokens on an NVIDIA GH200 (96 GB). We introduce **R**epeated **A**rrays-of-**C**ount **E**stimators (RACE) Attention, a kernel-inspired alternative to Softmax Attention that is strictly linear in sequence length and embedding dimension. RACE Attention replaces the exponential kernel with a sharpened angular similarity, and approximates attention outputs via Gaussian random projections and \emph{soft} Locality-Sensitive Hashing (LSH), avoiding construction of the full attention matrix. Across language modeling, masked language modeling, and text/image classification, RACE Attention matches or outperforms strong baselines at up to $64$K sequence length while reducing wall-clock time and memory usage. In addition, we conduct a controlled scaling study on a single attention layer and demonstrate processing of up to 12 million tokens on an NVIDIA GH200 GPU and 75 million tokens on an Intel Xeon® Gold 5220R CPU in a single forward–backward pass, which is well beyond the capabilities of current state-of-the-art attention implementations. RACE Attention thus offers a practical and theoretically grounded mechanism for long-context training on today’s hardware.
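To make the soft-LSH idea concrete, here is a minimal NumPy sketch of linear-time attention via Gaussian random projections and soft bucket assignments. This is an illustrative reconstruction from the abstract alone, not the paper's actual algorithm: the function name `race_attention_sketch`, the choice of a softmax over projection scores as the "soft" hash, and the hyperparameters `n_hashes`, `n_buckets`, and `temp` are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def race_attention_sketch(Q, K, V, n_hashes=4, n_buckets=16, temp=1.0, seed=0):
    """Hypothetical soft-LSH linear attention (illustration only, not the
    paper's exact RACE Attention). Cost is O(n * n_buckets * d) per hash:
    linear in sequence length n, never materializing the n x n matrix."""
    rng = np.random.default_rng(seed)
    n, d = Q.shape
    out = np.zeros_like(V)
    norm = np.zeros((n, 1))
    for _ in range(n_hashes):  # repeated arrays of estimators
        W = rng.standard_normal((d, n_buckets))   # Gaussian random projection
        pq = softmax(temp * (Q @ W))              # (n, B) soft bucket weights for queries
        pk = softmax(temp * (K @ W))              # (n, B) soft bucket weights for keys
        bucket_v = pk.T @ V                       # (B, d) value mass per bucket
        bucket_w = pk.sum(axis=0)                 # (B,)  key mass per bucket
        out += pq @ bucket_v                      # queries read from buckets
        norm += pq @ bucket_w[:, None]
    return out / norm                             # normalized weighted average of V
```

As a sanity check on the normalization: if every key is identical, each output row reduces exactly to the mean of the value rows, mirroring what uniform softmax attention would produce.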
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22728