Machine Comprehension Using Match-LSTM and Answer Pointer

Jun 20, 2021 (edited Mar 13, 2017) · ICLR 2017 conference submission · Readers: Everyone
  • TL;DR: Using Match-LSTM and Answer Pointer to select a variable length answer from a paragraph
  • Abstract: Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because, compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features. In addition, our boundary model also achieves the best performance on the MSMARCO dataset (Nguyen et al. 2016).
  • Keywords: Natural language processing, Deep learning
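The boundary variant described in the abstract uses Pointer Net to emit just two positions, the start and the end of the answer span, by attending over the passage representations. The sketch below is a simplified, hypothetical illustration of that decoding step, not the paper's implementation: `boundary_pointer`, its parameter names, and the attention-weighted state update (standing in for the paper's LSTM decoder) are all assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def boundary_pointer(H_r, V, W_a, b_a, v, c, h0):
    """Toy boundary-model decoder: two pointing steps over the passage.

    H_r : (P, l) passage-side representations (one row per token position)
    V, W_a, b_a, v, c : attention parameters (names are illustrative)
    h0 : (l,) initial decoder state

    Step 0 points at the answer start, step 1 at the answer end.
    NOTE: the paper uses an LSTM decoder state; here we simply feed the
    attention-weighted context back as the next state (an assumption).
    """
    h = h0
    idxs, dists = [], []
    for _ in range(2):
        # attention logits over all passage positions: v^T tanh(V h_r + W_a h + b_a)
        F = np.tanh(H_r @ V.T + h @ W_a.T + b_a)   # (P, l)
        beta = softmax(F @ v + c)                  # (P,) distribution over positions
        dists.append(beta)
        idxs.append(int(beta.argmax()))
        h = beta @ H_r                             # simplified state update
    start, end = idxs
    if end < start:                                # enforce a valid span
        start, end = end, start
    return start, end, dists
```

Because the model only has to score start and end positions (rather than generate every answer token), the boundary formulation handles variable-length answers with a constant number of decoding steps, which the abstract credits for its stronger results.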