Sequence-to-Sequence RNNs for Text Summarization

Ramesh Nallapati, Bing Xiang, Bowen Zhou

Feb 18, 2016 · ICLR 2016 workshop submission
  • Abstract: In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation. Our experiments show that the proposed architecture significantly outperforms the state-of-the-art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose extensions to the standard architecture, which we show contribute to further improvements in performance.
  • Conflicts: us.ibm.com
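
The abstract gives no implementation details, so the following is only a minimal illustrative sketch of an attentional encoder-decoder RNN of the general kind described, written in PyTorch. The GRU cells, layer sizes, and the simple concat-and-linear attention score are assumptions made here for brevity; they are not the authors' exact configuration.

```python
# Minimal sketch of an attentional encoder-decoder RNN for summarization.
# All hyperparameters and layer choices are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU encoder reads the source article.
        self.encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True,
                              batch_first=True)
        # Unidirectional GRU decoder generates the summary, conditioned
        # on the previous token embedding and the attention context.
        self.decoder = nn.GRU(emb_dim + 2 * hid_dim, hid_dim,
                              batch_first=True)
        # Simple concat-and-linear attention scorer (an assumption;
        # other scoring functions are common).
        self.attn = nn.Linear(hid_dim + 2 * hid_dim, 1)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.embed(src))   # (B, S, 2H)
        B, S, _ = enc_out.shape
        h = torch.zeros(1, B, self.decoder.hidden_size, device=src.device)
        logits = []
        for t in range(tgt.size(1)):
            # Score every source position against the current decoder
            # state, then form a soft attention context vector.
            q = h[-1].unsqueeze(1).expand(-1, S, -1)  # (B, S, H)
            scores = self.attn(torch.cat([q, enc_out], -1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)       # (B, S)
            ctx = (weights.unsqueeze(-1) * enc_out).sum(1)  # (B, 2H)
            step_in = torch.cat([self.embed(tgt[:, t]), ctx], -1)
            dec_out, h = self.decoder(step_in.unsqueeze(1), h)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)             # (B, T, V)
```

During training, the returned logits would be compared against the gold summary tokens with a cross-entropy loss (teacher forcing); at test time the decoder loop would feed back its own predictions, typically with beam search.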
