Tree-Structured Attention with Hierarchical Accumulation

Xuan-Phi Nguyen, Shafiq Joty, Steven Hoi, Richard Socher

Published: 20 Dec 2019, Last Modified: 03 Apr 2024
ICLR 2020 Conference Blind Submission
Keywords: Tree, Constituency Tree, Hierarchical Accumulation, Machine Translation, NMT, WMT, IWSLT, Text Classification, Sentiment Analysis
Abstract: Incorporating hierarchical structures like constituency trees has been shown to be effective for various natural language processing (NLP) tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with Hierarchical Accumulation to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions.
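The abstract's central idea, encoding a constituency tree into self-attention by accumulating leaf representations into phrase-level node representations, can be illustrated with a minimal sketch. This is not the paper's exact formulation (which defines a more elaborate upward accumulation and weighted aggregation); the toy tree, the averaging rule, and all names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy constituency tree for "the cat sat":
# (S (NP the cat) (VP sat)). Each internal node lists the indices
# of the leaf tokens it dominates.
leaves = {"the": 0, "cat": 1, "sat": 2}
nodes = {"NP": [0, 1], "VP": [2], "S": [0, 1, 2]}

d = 4  # embedding dimension (illustrative)
rng = np.random.default_rng(0)
leaf_emb = rng.standard_normal((len(leaves), d))  # token embeddings

def accumulate(node_spans, leaf_emb):
    """Represent each internal node by averaging the embeddings of the
    leaves it dominates -- a simplified stand-in for the paper's
    hierarchical accumulation."""
    return np.stack([leaf_emb[span].mean(axis=0)
                     for span in node_spans.values()])

node_emb = accumulate(nodes, leaf_emb)

# Attention keys/values now cover both leaves and internal nodes, so a
# query can attend to phrases (NP, VP, S) as well as individual tokens,
# which is how the model can "prefer phrase-level attentions".
keys = np.concatenate([leaf_emb, node_emb], axis=0)
print(keys.shape)  # (6, 4): 3 tokens + 3 phrase nodes
```

Because each node representation is a fixed aggregation over its leaf span, the extra cost per attention step does not grow with tree depth, which is the intuition behind the claimed constant time complexity.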
Data: SST, SST-2, SST-5
Community Implementations: 1 code implementation (CatalyzeX)