SageAttention2++: A More Efficient Implementation of SageAttention2

Published: 11 Jun 2025, Last Modified: 10 Jul 2025 · ES-FoMo III · CC BY 4.0
Keywords: Attention, Quantization, LLM, DiT, Video Generation, Efficient Attention, FlashAttention, SageAttention, Tensor Core
TL;DR: SageAttention2++: A More Efficient Implementation of SageAttention2
Abstract: The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by using quantization to speed up the matrix multiplications (Matmul) in attention. To further accelerate SageAttention2, we propose utilizing the faster FP8 Matmul instruction that accumulates in FP16. This instruction is 2$\times$ faster than the FP8 Matmul used in SageAttention2. Our experiments show that SageAttention2++ achieves a $\textbf{3.9}\times$ speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. As a result, SageAttention2++ effectively accelerates a range of models, including those for language, image, and video generation, with negligible loss in end-to-end metrics.
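To illustrate the quantized-attention idea described in the abstract, the snippet below is a minimal numerical sketch, not the authors' kernel. It assumes PyTorch >= 2.1 for the `torch.float8_e4m3fn` dtype and uses a simplified per-tensor scale (SageAttention2 uses finer-grained quantization). It only emulates the accuracy effect of quantizing Q and K to FP8; the speedup reported in the paper comes from a fused CUDA kernel that issues FP8 MMA instructions accumulating in FP16, which this CPU/GPU-agnostic emulation does not reproduce.

```python
# Hypothetical illustration of FP8-quantized attention scores (not the
# SageAttention2++ implementation). Assumes PyTorch >= 2.1.
import torch

def quantize_fp8(x: torch.Tensor):
    """Symmetric per-tensor scaling into the e4m3 range (max normal ~448)."""
    scale = x.abs().amax().clamp(min=1e-12) / 448.0
    return (x / scale).to(torch.float8_e4m3fn), scale

def qk_scores_fp8(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Emulate Q @ K^T on FP8-quantized inputs, then undo the scales."""
    q8, sq = quantize_fp8(q)
    k8, sk = quantize_fp8(k)
    # Dequantize and multiply in FP32 here purely for emulation; the real
    # kernel keeps the operands in FP8 and accumulates products in FP16.
    scores = q8.to(torch.float32) @ k8.to(torch.float32).transpose(-1, -2)
    return scores * (sq * sk)

# Usage: measure the relative error introduced by FP8 quantization.
q, k = torch.randn(8, 1024, 64), torch.randn(8, 1024, 64)
ref = q @ k.transpose(-1, -2)
err = (qk_scores_fp8(q, k) - ref).abs().mean() / ref.abs().mean()
print(f"mean relative error: {err.item():.3e}")
```

The design point exploited by SageAttention2++ is that, on GPUs supporting FP8 Tensor Core Matmul, the variant that accumulates in FP16 runs at twice the throughput of the one used in SageAttention2, while the quantization error profile of the inputs stays the same.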
Submission Number: 96