Reasoning with Memory Augmented Neural Networks for Language Comprehension
Tsendsuren Munkhdalai, Hong Yu
Nov 03, 2016 (modified: Mar 02, 2017) · ICLR 2017 conference submission · Readers: everyone
Abstract: Hypothesis testing is an important cognitive process that supports human reasoning. In this paper, we introduce a computational hypothesis testing approach based on memory-augmented neural networks. Our approach involves a hypothesis testing loop that reconsiders and progressively refines a previously formed hypothesis in order to generate new hypotheses to test. We apply the proposed approach to the language comprehension task using Neural Semantic Encoders (NSE). Our NSE models achieve state-of-the-art results, showing an absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by single and ensemble systems on standard machine comprehension benchmarks such as the Children's Book Test (CBT) and Who-Did-What (WDW) news article datasets.
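The hypothesis testing loop described in the abstract can be sketched at a high level as follows. This is a minimal illustrative sketch only, not the authors' NSE implementation: the toy tanh "encoder", the additive merge rule, the dot-product candidate scoring, and the fixed number of refinement steps are all assumptions introduced here for illustration.

```python
import numpy as np

def encode(vec, W):
    # Toy "encoder": a linear map followed by tanh.
    # (Assumption: stands in for the paper's Neural Semantic Encoder.)
    return np.tanh(W @ vec)

def hypothesis_testing_loop(query_vec, doc_vec, candidates, W, steps=3):
    """Iteratively refine a hypothesis by re-reading the document.

    query_vec:  embedding of the question (1-D array)
    doc_vec:    embedding of the document (1-D array)
    candidates: matrix whose rows are candidate-answer embeddings
    Returns the index of the best-scoring candidate after `steps` refinements.
    """
    # Form an initial hypothesis from the query alone.
    hypothesis = encode(query_vec, W)
    for _ in range(steps):
        # Re-encode the document as fresh evidence.
        evidence = encode(doc_vec, W)
        # Reconsider: merge the previous hypothesis with the evidence
        # to produce a refined hypothesis for the next iteration.
        hypothesis = np.tanh(hypothesis + evidence)
    # Score each candidate answer against the final hypothesis.
    scores = candidates @ hypothesis
    return int(np.argmax(scores))
```

The key structural idea mirrored here is that the hypothesis is not fixed after one read: each pass feeds the previous hypothesis back in alongside re-encoded evidence, so later iterations can revise an earlier guess.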
Keywords: Natural language processing, Deep learning