Keywords: learning theory, almost-linear time, multi-layer transformer
Abstract: The quadratic computational complexity in the self-attention mechanism of popular transformer architectures poses significant challenges for training and inference, particularly in terms of efficiency and memory requirements.
Towards addressing these challenges, this paper introduces a novel fast computation method for gradient calculation in multi-layer transformer models.
Our approach enables the computation of gradients for the entire multi-layer transformer model in almost linear time $n^{1+o(1)}$, where $n$ is the input sequence length.
This breakthrough significantly reduces the computational bottleneck associated with the traditional quadratic time complexity.
Our theory holds for any loss function and maintains a bounded approximation error across the entire model.
Furthermore, our analysis holds when the multi-layer transformer model contains common practical sub-modules, such as residual connections, causal masking, and multi-head attention.
By improving the efficiency of gradient computation in large language models, we hope that our theoretical results will facilitate more effective training and deployment of long-context language models.
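To make the almost-linear-time claim concrete, the sketch below illustrates one standard way the quadratic attention cost can be avoided: if the $n \times n$ attention score matrix admits an approximate rank-$k$ factorization with $k = n^{o(1)}$, then the forward (and, by the same associativity argument, the backward) matrix products can be evaluated in $O(nkd)$ time without ever forming the $n \times n$ matrix. This is a minimal illustration under that low-rank assumption, not the paper's actual algorithm; the names `attention_lowrank`, `U1`, and `U2` are hypothetical, and the factors are taken as given rather than constructed.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): shows how a low-rank
# factorization of the softmax attention matrix avoids n^2 work.
# Assumption: the score matrix A = exp(Q K^T / sqrt(d)) is approximated
# by U1 @ U2.T with rank k = n^{o(1)}; U1, U2 are given here.

def attention_lowrank(U1, U2, V):
    """Approximate attention output D^{-1} A V using O(n*k*d) work.

    U1, U2 : (n, k) nonnegative low-rank factors with A ~ U1 @ U2.T
    V      : (n, d) value matrix
    """
    # Softmax normalizer D = diag(A @ 1_n) without forming A:
    # A @ 1_n = U1 @ (U2.T @ 1_n)
    d_vec = U1 @ U2.sum(axis=0)           # shape (n,)
    # A @ V = U1 @ (U2.T @ V): costs n*k*d instead of n^2*d
    AV = U1 @ (U2.T @ V)
    return AV / d_vec[:, None]

if __name__ == "__main__":
    n, k, d = 1024, 16, 64
    rng = np.random.default_rng(0)
    # Nonnegative factors mimic exponentiated (positive) attention scores.
    U1 = rng.random((n, k))
    U2 = rng.random((n, k))
    V = rng.standard_normal((n, d))
    out = attention_lowrank(U1, U2, V)
    print(out.shape)  # (1024, 64)
```

The same reassociation applies to the matrix products arising in the gradient computation, which is what allows the backward pass through each attention layer to stay near-linear in $n$ under such an approximation.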
Submission Number: 11