Hierarchical Attention: What Really Counts in Various NLP Tasks

27 Sept 2018 (modified: 22 Oct 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Attention mechanisms in sequence-to-sequence models have achieved strong performance across a wide range of natural language processing (NLP) tasks, including sentence embedding, text generation, machine translation, and machine reading comprehension. However, existing attention mechanisms learn either high-level or low-level features, but not both. In this paper, we argue that this lack of hierarchy is a bottleneck for attention mechanisms, and we propose a novel Hierarchical Attention Mechanism (Ham) based on a weighted sum of the different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on Chinese poem generation and an average improvement of nearly 6.5% over existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems show that Ham has greater generalization and representation ability than existing attention mechanisms.
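To make the weighted-sum idea from the abstract concrete, here is a minimal sketch, assuming standard multi-head self-attention as the per-level attention and a learned softmax weighting over the level outputs. The class name `HierarchicalAttentionSketch`, the hyperparameters, and the use of `torch.nn.MultiheadAttention` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: stack several attention levels and combine their outputs
# with a learned softmax-normalized weighted sum, so both low-level and
# high-level features contribute to the final representation.
import torch
import torch.nn as nn


class HierarchicalAttentionSketch(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, num_levels: int):
        super().__init__()
        # A stack of attention layers; level k attends over the output of level k-1.
        self.levels = nn.ModuleList(
            [nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
             for _ in range(num_levels)]
        )
        # One learnable scalar weight per level, normalized with softmax in forward().
        self.level_logits = nn.Parameter(torch.zeros(num_levels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = []
        h = x
        for attn in self.levels:
            h, _ = attn(h, h, h)  # self-attention at this level
            outputs.append(h)
        # Weighted sum over levels (the hierarchical combination).
        w = torch.softmax(self.level_logits, dim=0)
        return sum(w[k] * outputs[k] for k in range(len(outputs)))


# Usage: a batch of 2 sequences, length 10, embedding size 64.
ham = HierarchicalAttentionSketch(embed_dim=64, num_heads=4, num_levels=3)
y = ham(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```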
Keywords: attention, hierarchical, machine reading comprehension, poem generation
TL;DR: The paper proposes a novel hierarchical attention mechanism to replace the standard attention model in various NLP tasks.
Community Implementations: 5 code implementations (https://www.catalyzex.com/paper/arxiv:1808.03728/code)