TL;DR: We speed up transformer training by introducing learnable, input-dependent residual connections combined with depth-wise cross-attention.
Abstract: Transformer networks have achieved remarkable success across diverse domains, leveraging a variety of architectural innovations, including residual connections. However, traditional residual connections, which simply sum the outputs of previous layers, can dilute crucial information. This work introduces DeepCrossAttention (DCA), an approach that enhances residual learning in transformers. DCA employs learnable, input-dependent weights to dynamically combine layer outputs, enabling the model to selectively focus on the most relevant information in any of the previous layers. Furthermore, DCA incorporates depth-wise cross-attention, allowing for richer interactions between layers at different depths. Our language modeling experiments show that DCA achieves improved perplexity for a given training time. Moreover, DCA obtains the same model quality up to 3x faster while adding a negligible number of parameters (e.g., 0.2%). Theoretical analysis confirms that DCA provides an improved trade-off between accuracy and model size when the ratio of collective layer ranks to the ambient dimension falls below a critical threshold.
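To make the abstract's description concrete, the following is a minimal sketch of an input-dependent combination of previous layer outputs in PyTorch. The module name `DCACombiner`, the softmax gate driven by the most recent hidden state, and the single linear gating layer are illustrative assumptions rather than the paper's actual design, and the depth-wise cross-attention component is omitted.

```python
import torch
import torch.nn as nn


class DCACombiner(nn.Module):
    """Sketch of an input-dependent combiner over previous layer outputs.

    Illustrative stand-in for the weighted combination described in the
    abstract, not the authors' implementation; depth-wise cross-attention
    is not shown.
    """

    def __init__(self, d_model: int, num_prev_layers: int):
        super().__init__()
        # Tiny gating head: one scalar weight per previous layer output,
        # computed from the current hidden state. Adds only about
        # d_model * num_prev_layers parameters per block.
        self.gate = nn.Linear(d_model, num_prev_layers)

    def forward(self, prev_outputs: list[torch.Tensor]) -> torch.Tensor:
        # prev_outputs: L tensors of shape (batch, seq, d_model).
        stacked = torch.stack(prev_outputs, dim=-2)               # (B, S, L, D)
        # Input-dependent weights derived from the most recent hidden state.
        weights = torch.softmax(self.gate(prev_outputs[-1]), -1)  # (B, S, L)
        # Weighted sum over the depth axis replaces the plain residual sum.
        return (weights.unsqueeze(-1) * stacked).sum(dim=-2)      # (B, S, D)


# Hypothetical usage at block i of a transformer: combine all earlier
# outputs before feeding the block, instead of summing them uniformly.
if __name__ == "__main__":
    B, S, D, L = 2, 16, 64, 4
    prev = [torch.randn(B, S, D) for _ in range(L)]
    combined = DCACombiner(D, L)(prev)
    print(combined.shape)  # torch.Size([2, 16, 64])
```

In this sketch the gating layer contributes only a handful of extra parameters per block, which is in line with the abstract's claim of a negligible parameter increase.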
Lay Summary: Powerful AI models, known as Transformers, are constructed with many processing layers stacked sequentially. A standard technique to aid learning in these deep models involves residual connections, which simply sum the outputs from previous layers. The problem is that this straightforward addition can cause crucial information from earlier layers to be progressively diluted, diminishing its influence in later stages of computation.
We introduce DeepCrossAttention (DCA), a method that creates more sophisticated connections between layers. Instead of uniformly summing outputs, DCA uses learnable, dynamic weights to intelligently combine information. This allows the model to assess the relevance of outputs from all preceding layers and selectively amplify the most critical information for its current processing step.
This targeted approach to information flow allows our models to learn significantly faster, achieving the same level of performance up to three times more quickly than standard models. This efficiency gain is accomplished with a negligible increase in the model's size (less than 0.2% more parameters). Our work enables the development of more powerful and efficient AI systems.
Primary Area: Deep Learning->Attention Mechanisms
Keywords: residual network, cross attention, resnet, transformer
Submission Number: 13577