Understanding the Role of Self Attention for Efficient Speech Recognition

29 Sept 2021, 00:30 (edited 15 Feb 2022) · ICLR 2022 Spotlight · Readers: Everyone
  • Keywords: transformer, self attention, speech recognition
  • Abstract: Self-attention (SA) is a critical component of the Transformer neural networks that have succeeded in automatic speech recognition (ASR). In this paper, we analyze the role of SA in Transformer-based ASR models, both to understand the mechanism behind the improved recognition accuracy and to lower the computational complexity. We reveal that SA performs two distinct roles: phonetic and linguistic localization. In particular, we show experimentally that phonetic localization in the lower layers extracts phonologically meaningful features from speech and reduces the phonetic variance in the utterance, enabling proper linguistic localization in the upper layers. From this understanding, we find that attention maps can be reused across layers as long as their localization capability is preserved. To evaluate this idea, we implement layer-wise attention map reuse on real GPU platforms and achieve up to a 1.96× speedup in inference and 33% savings in training time, with noticeably improved ASR performance on the challenging LibriSpeech dev/test-other benchmark.
  • One-sentence Summary: We analyze the role of self attention in Transformer-based speech recognition and present a practical technique for designing a model that accelerates inference and improves performance.
  • Supplementary Material: zip
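The attention map reuse described in the abstract can be illustrated with a minimal NumPy sketch: the softmax attention map computed in one layer is reused by the next layer, which then only needs its own value and output projections. This is a simplified single-head setup with illustrative variable names and no residual connections or layer norm; it is not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(x, w_q, w_k):
    # Standard scaled dot-product attention map: softmax(QK^T / sqrt(d)).
    d = w_q.shape[1]
    q, k = x @ w_q, x @ w_k
    return softmax(q @ k.T / np.sqrt(d))

def sa_layer(x, attn, w_v, w_o):
    # Apply a (possibly reused) attention map: only the V and output
    # projections run, skipping the Q/K projections and the softmax.
    return attn @ (x @ w_v) @ w_o

# Toy example: sequence length T, model dimension d.
rng = np.random.default_rng(0)
T, d = 6, 8
x = rng.standard_normal((T, d))
w_q, w_k, w_v1, w_o1, w_v2, w_o2 = (rng.standard_normal((d, d)) for _ in range(6))

attn = attention_map(x, w_q, w_k)    # computed once in the lower layer
h1 = sa_layer(x, attn, w_v1, w_o1)   # layer i applies it
h2 = sa_layer(h1, attn, w_v2, w_o2)  # layer i+1 reuses it
```

The saving comes from the reusing layer skipping the query/key projections and the T×T softmax, while each layer keeps its own value and output weights.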