Emergent Predication Structure in Vector Representations of Neural Readers

Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester

Nov 04, 2016 (modified: Dec 14, 2016) · ICLR 2017 conference submission
  • Abstract: Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c, and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that adding linguistic features to the input of existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.
  • TL;DR: We provide novel insights into reading comprehension models and boost their performance
  • Conflicts: ttic.edu
  • Keywords: Natural language processing, Deep learning, Applications
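
The predication-structure hypothesis in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the dimensions, entity names, and scoring rule below are illustrative assumptions. The idea is that a reader's hidden state vector decomposes into a predicate half P and a constant-symbol (entity) half c, so that answer selection amounts to matching the constant half against candidate entity embeddings.

```python
import numpy as np

# Hypothetical sketch: treat a hidden state h as the concatenation
# [P, c] of a predicate vector P and a constant-symbol vector c,
# so that h represents the atomic formula P(c).

rng = np.random.default_rng(0)
d = 32  # illustrative half-dimension, not from the paper

# Embeddings for candidate answer entities (constant symbols).
entities = {name: rng.standard_normal(d) for name in ["e1", "e2", "e3"]}

# Build a hidden state whose constant half is the true answer's
# embedding, as the emergent-structure hypothesis suggests.
predicate = rng.standard_normal(d)
hidden = np.concatenate([predicate, entities["e2"]])

# Split the state back into its two halves and score each candidate
# by the inner product of its embedding with the constant half.
P, c = hidden[:d], hidden[d:]
scores = {name: float(c @ emb) for name, emb in entities.items()}
best = max(scores, key=scores.get)
```

Under this toy construction the constant half exactly matches one entity embedding, so the inner-product score recovers that entity; in a trained reader the decomposition is only approximate and is what the paper's experiments probe.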