DFSSATTEN: Dynamic Fine-grained Structured Sparse Attention Mechanism

Published: 28 Jan 2022 · Last Modified: 13 Feb 2023 · ICLR 2022 Submission
Abstract: Transformers are becoming mainstream solutions for various tasks in NLP and computer vision. Despite their success, the quadratic complexity of their attention mechanism hinders their application to latency-sensitive tasks. Tremendous efforts have been made to alleviate this problem, and many of them successfully reduce the asymptotic complexity to linear. Nevertheless, few of them achieve practical speedups over the original full attention, especially at moderate sequence lengths. In this paper, we present DFSSATTEN, an attention mechanism that dynamically prunes the full attention weight matrix to the 50% fine-grained structured sparse pattern supported by the sparse tensor cores on the NVIDIA A100 GPU. We provide both theoretical and empirical evidence that DFSSATTEN is a good approximation of the full attention mechanism and achieves wall-clock speedups at arbitrary sequence lengths. We evaluate our method on tasks from various domains with sequence lengths from 256 to 4096. DFSSATTEN achieves 1.27∼1.89× speedups over the full attention mechanism with no accuracy loss.
One-sentence Summary: We exploit the 50% fine-grained structured sparsity of the A100 GPU to accelerate the attention mechanism, yielding a 1.27~1.89x speedup with no accuracy loss.
Supplementary Material: zip
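
The core idea in the abstract can be illustrated with a short, self-contained sketch. The code below is not the authors' implementation and does not use the A100 sparse tensor cores, so it realizes no speedup; it is a minimal PyTorch illustration, under the assumption that the 2:4 (50%) pruning is applied to the pre-softmax score matrix by keeping the two largest entries in every group of four. The names `prune_2_to_4` and `dfss_like_attention` are placeholders introduced here, not names from the paper.

```python
import math
import torch

def prune_2_to_4(scores: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest values in every contiguous group of 4 along the last dim; mask the rest."""
    *lead, n = scores.shape
    assert n % 4 == 0, "this sketch assumes the sequence length is a multiple of 4"
    groups = scores.reshape(*lead, n // 4, 4)
    topk = groups.topk(2, dim=-1).indices                      # positions of the 2 largest scores per group
    mask = torch.zeros_like(groups).scatter_(-1, topk, 1.0).bool()
    pruned = torch.where(mask, groups, torch.full_like(groups, float("-inf")))
    return pruned.reshape(*lead, n)

def dfss_like_attention(q, k, v):
    """Scaled dot-product attention with the score matrix pruned to a 2:4 pattern before softmax."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # (batch, seq, seq)
    scores = prune_2_to_4(scores)                               # dynamic fine-grained structured pruning
    probs = scores.softmax(dim=-1)                              # pruned entries (-inf) become exactly 0
    return probs @ v

# Illustrative shapes only: 2 heads, sequence length 256, head dimension 64.
q, k, v = (torch.randn(2, 256, 64) for _ in range(3))
out = dfss_like_attention(q, k, v)
print(out.shape)  # torch.Size([2, 256, 64])
```

In this dense emulation the pruned entries are merely zeroed after the softmax; the wall-clock gains reported in the paper come from executing the resulting 2:4-sparse matrix multiplications on the A100's sparse tensor cores.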