SummaryMixing: A Linear-Complexity Alternative to Self-Attention for Speech Recognition and Understanding
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: efficient deep learning, speech recognition, spoken language understanding
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Modern speech processing systems rely on self-attention. Unfortunately, token
mixing with self-attention takes quadratic time in the length of the speech utterance,
slowing down inference as well as training and increasing memory consumption.
Cheaper alternatives to self-attention for ASR have been developed, but they fail to
consistently reach the same level of accuracy. However, attention layers in trained
speech recognizers tend not to capture fine-grained pair-wise information. This
paper therefore proposes a novel linear-time alternative to self-attention: it
summarises a whole utterance with the mean over vectors for all time steps, then
combines this single summary with time-specific information. We call this method
“SummaryMixing”. Introducing SummaryMixing in state-of-the-art ASR models
makes it feasible to preserve or exceed previous speech recognition performance
while lowering the training and inference times by up to 28% and reducing the
memory budget by a factor of two. The benefits of SummaryMixing can also be
generalized to other speech-processing tasks, such as speech understanding.
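The abstract's description of SummaryMixing — a mean over per-step vectors, broadcast back and fused with a time-specific branch — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the weight matrices (`w_f`, `w_s`, `w_c`), the `tanh` nonlinearity, and the concatenation-based fusion are all assumptions for exposition; only the mean-pooled summary combined with per-step features comes from the abstract.

```python
import numpy as np

def summary_mixing(x, w_f, w_s, w_c):
    """Linear-time token mixing sketch: each output frame combines a
    per-step transform with one utterance-wide mean summary.

    x: (T, d) input frames; w_f, w_s: (d, d); w_c: (2*d, d).
    Every operation is O(T), unlike the O(T^2) pair-wise scores
    of self-attention.
    """
    local = np.tanh(x @ w_f)             # time-specific branch
    summary = np.tanh(x @ w_s).mean(0)   # single summary vector (d,)
    # Broadcast the summary to every time step and fuse both branches.
    tiled = np.tile(summary, (x.shape[0], 1))
    fused = np.concatenate([local, tiled], axis=1)  # (T, 2*d)
    return fused @ w_c                   # (T, d) output

# Usage sketch with random weights (shapes only, no trained model).
rng = np.random.default_rng(0)
T, d = 50, 16
x = rng.standard_normal((T, d))
out = summary_mixing(
    x,
    rng.standard_normal((d, d)),
    rng.standard_normal((d, d)),
    rng.standard_normal((2 * d, d)),
)
assert out.shape == (T, d)
```

Because the summary is a plain mean, doubling the utterance length doubles the work and memory rather than quadrupling it, which is the source of the reported training-time and memory savings.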
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5692