FusionNet: Fusing via Fully-aware Attention with Application to Machine Comprehension

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "History of Word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it identifies an attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as the context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD), where it ranks first for both the single and ensemble models on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). We also verify the generalization of FusionNet on two adversarial SQuAD datasets, where it sets a new state of the art on both: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.
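
To make the fully-aware attention idea above concrete, here is a minimal NumPy sketch (not the authors' code): each context and question word is represented by its "history of word" vector, scores are computed with a constrained bilinear function of the form S(x, y) = ReLU(xU) D ReLU(yU)^T as identified in the paper, and the resulting attention weights fuse question information into the context. The array names and shapes are illustrative assumptions.

```python
import numpy as np

def fully_aware_attention(ctx_how, q_how, q_values, U, diag):
    """One fully-aware attention layer (a sketch, not the official implementation).

    ctx_how : (n_ctx, d_hw)  history-of-word vectors for the context words
    q_how   : (n_q,   d_hw)  history-of-word vectors for the question words
    q_values: (n_q,   d_v)   question representations to be fused into the context
    U       : (d_hw,  k)     shared projection of the scoring function (assumed name)
    diag    : (k,)           diagonal of D in S(x, y) = ReLU(xU) D ReLU(yU)^T
    """
    c = np.maximum(ctx_how @ U, 0.0)             # ReLU(xU), shape (n_ctx, k)
    q = np.maximum(q_how @ U, 0.0)               # ReLU(yU), shape (n_q, k)
    scores = (c * diag) @ q.T                    # bilinear scores, shape (n_ctx, n_q)
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ q_values                      # fused question info, (n_ctx, d_v)
```

In the full architecture this fusion is applied layer by layer, with the history-of-word vectors formed by concatenating representations from the word level up through the higher semantic levels.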
TL;DR: We propose a light-weight enhancement for attention and a neural architecture, FusionNet, to achieve SotA on SQuAD and adversarial SQuAD.
Keywords: Attention Mechanism, Machine Comprehension, Natural Language Processing, Deep Learning
Code: [momohuang/FusionNet-NLI](https://github.com/momohuang/FusionNet-NLI) + [2 community implementations](https://paperswithcode.com/paper/?openreview=BJIgi_eCZ)
Data: [SQuAD](https://paperswithcode.com/dataset/squad)