Linguistically-Informed Self-Attention for Semantic Role Labeling

Anonymous

08 Apr 2018 (modified: 08 Apr 2018) · OpenReview Anonymous Preprint Blind Submission
Abstract: The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax. In this work, we present linguistically-informed self-attention (LISA): a new neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech tagging, predicate detection and SRL. Syntax is incorporated by training one of the attention heads to attend to the syntactic parent of each token. Our model can predict all of the above tasks, but it is also trained such that if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model. In experiments on the CoNLL-2005 SRL dataset, LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates and more than 2.0 F1 on out-of-domain data. On CoNLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13% reduction in error.
TL;DR: Our combination of multi-task learning and self-attention, training the model to attend to parents in a syntactic parse tree, achieves state-of-the-art CoNLL-2005 and CoNLL-2012 SRL results for models using predicted predicates.
Keywords: semantic role labeling, multi-task learning, self-attention
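
To make the core idea in the abstract concrete, below is a minimal sketch (not the authors' released implementation) of a single self-attention head whose attention distribution is trained to point at each token's syntactic parent, and which can instead be fed an externally supplied parse at test time. All function names, shapes, and the NumPy setup are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Sketch of a syntactically-informed attention head in the spirit of LISA.
# Assumptions: token representations H of shape (n_tokens, d_model), one head,
# and an optional array of parent indices that overrides the learned attention.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def syntactic_attention_head(H, Wq, Wk, Wv, injected_parents=None):
    """One attention head over token representations H.

    If `injected_parents` (length-n array of parent indices) is given, the
    learned attention distribution is replaced with a one-hot distribution
    over those parents -- mirroring the ability to inject a high-quality
    parse at test time without retraining the SRL model.
    """
    n = H.shape[0]
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) attention logits
    attn = softmax(scores, axis=-1)           # learned token-to-parent distribution
    if injected_parents is not None:
        attn = np.eye(n)[injected_parents]    # one-hot rows: token -> supplied parent
    return attn @ V, attn

def parent_attention_loss(attn, gold_parents):
    """Auxiliary cross-entropy pushing this head's attention toward gold parents.
    In multi-task training this would sit alongside SRL, POS-tagging, and
    predicate-detection objectives."""
    n = attn.shape[0]
    return -np.mean(np.log(attn[np.arange(n), gold_parents] + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 16
    H = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    # Toy parse (hypothetical): gold_parents[i] is the index of token i's head.
    gold_parents = np.array([1, 1, 1, 2, 2])
    out, attn = syntactic_attention_head(H, Wq, Wk, Wv)
    print("auxiliary parse loss:", parent_attention_loss(attn, gold_parents))
    # At test time, a high-quality external parse can simply replace the head's
    # attention without any retraining:
    out_injected, _ = syntactic_attention_head(H, Wq, Wk, Wv, injected_parents=gold_parents)
```

The design choice this sketch highlights is that the parse is expressed entirely through one head's attention weights, so swapping in a better parse at test time is just substituting that head's distribution rather than changing any model parameters.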