Fast and Accurate Reading Comprehension by Combining Self-Attention and Convolution

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

Feb 15, 2018 (modified: Feb 27, 2018) ICLR 2018 Conference Blind Submission
  • Abstract: Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A model that does not require recurrent networks: It consists exclusively of attention and convolutions, yet achieves equivalent or better performance than existing models. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. This data augmentation technique not only enhances the training examples but also diversifies the phrasing of the sentences, which results in immediate accuracy improvements. Our single model achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
  • TL;DR: A simple architecture consisting of convolutions and attention achieves results on par with the best documented recurrent models.
  • Keywords: squad, stanford question answering dataset, reading comprehension, attention, text convolutions, question answering
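The abstract's core claim is that convolutions plus self-attention, with no recurrence, suffice for the encoder. A minimal NumPy sketch of such an encoder block is shown below; this is an illustration of the general idea, not the authors' implementation, and the single attention head, layer sizes, and residual layout are assumptions made for brevity.

```python
# Sketch of one recurrence-free encoder block: 1-D convolution followed by
# self-attention, each with a residual connection. Illustrative only; the
# shapes and single-head attention are assumptions, not the paper's config.
import numpy as np

def conv1d(x, w):
    # x: (seq_len, d); w: (k, d, d) kernel; 'same' zero padding
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([
        sum(xp[i + j] @ w[j] for j in range(k))
        for i in range(x.shape[0])
    ])

def self_attention(x, wq, wk, wv):
    # Scaled dot-product attention over the whole sequence (single head).
    q, k_, v = x @ wq, x @ wk, x @ wv
    scores = q @ k_.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v

def encoder_block(x, conv_w, wq, wk, wv):
    # Convolution captures local structure, attention captures global
    # interactions; both are parallel over positions (no recurrence).
    x = x + conv1d(x, conv_w)
    x = x + self_attention(x, wq, wk, wv)
    return x

rng = np.random.default_rng(0)
seq_len, d, k = 5, 8, 3
x = rng.standard_normal((seq_len, d))
params = [rng.standard_normal(s) * 0.1
          for s in [(k, d, d), (d, d), (d, d), (d, d)]]
y = encoder_block(x, *params)
print(y.shape)  # (5, 8)
```

Because every position is processed in parallel (matrix multiplications rather than a step-by-step recurrence), blocks like this are what make the reported training and inference speed-ups over RNN encoders possible.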