A comprehensive analysis on attention models

Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney

Oct 22, 2018 · NIPS 2018 Workshop IRASL (Blind Submission)
  • Abstract: Sequence-to-sequence attention-based models are a promising approach for end-to-end speech recognition. Their increased model power makes training more difficult, and their end-to-end nature makes failure modes harder to analyze. In this work, we present various analyses to better understand training and model properties. We investigate pretraining variants, such as growing the model in depth and width, and their impact on final performance, which leads to over 8% relative improvement in word error rate (a hedged sketch of the depth-growing scheme is given below). For a better understanding of how the attention process works, we study the encoder output and the attention energies and weights. Our experiments were performed on Switchboard, LibriSpeech and Wall Street Journal.
  • Keywords: attention, encoder-decoder, speech recognition, analysis, pretraining
  • TL;DR: improved pretraining, and analysing encoder output and attention
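
The depth-growing pretraining mentioned in the abstract can be pictured with a minimal sketch: start with a shallow bidirectional encoder, train it, then append further layers between stages while keeping the already-trained ones. This is not the authors' actual setup (they use RETURNN, and also grow in width, which would additionally require expanding existing weight matrices); the PyTorch framework, class names, and dimensions below are all assumptions made for illustration.

    # Hypothetical sketch of pretraining by growing encoder depth.
    # Not the authors' configuration; names and sizes are illustrative only.
    import torch
    import torch.nn as nn

    class GrowingEncoder(nn.Module):
        """BLSTM encoder whose depth can be increased between pretraining stages."""
        def __init__(self, input_dim=40, hidden_dim=512):
            super().__init__()
            self.hidden_dim = hidden_dim
            self.layers = nn.ModuleList(
                [nn.LSTM(input_dim, hidden_dim, bidirectional=True, batch_first=True)]
            )

        def add_layer(self):
            # New layers consume the 2*hidden_dim output of the previous BLSTM layer;
            # existing layers and their trained weights are kept untouched.
            self.layers.append(
                nn.LSTM(2 * self.hidden_dim, self.hidden_dim,
                        bidirectional=True, batch_first=True)
            )

        def forward(self, x):
            for lstm in self.layers:
                x, _ = lstm(x)
            return x

    encoder = GrowingEncoder()
    features = torch.randn(8, 100, 40)   # (batch, time, feature) dummy input
    for stage in range(3):                # e.g. 3 pretraining stages: 1 -> 3 layers
        # ... train for a few epochs at the current depth, then grow ...
        print(f"stage {stage}: depth {len(encoder.layers)}, "
              f"output shape {tuple(encoder(features).shape)}")
        encoder.add_layer()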