Generating Wikipedia by Summarizing Long Sequences

Peter J. Liu*, Mohammad Saleh*, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer

Feb 15, 2018 (modified: Oct 27, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores, and human evaluations.
  • TL;DR: We generate Wikipedia articles abstractively conditioned on source document text.
  • Keywords: abstractive summarization, Transformer, long sequences, natural language processing, sequence transduction, Wikipedia, extractive summarization
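
The abstract's decoder-only formulation can be illustrated with a minimal, hypothetical sketch (not the authors' code): the extracted source text and the target article are concatenated into a single token sequence separated by a delimiter, and a causally masked Transformer is trained as a language model over that sequence. All names and hyperparameters below (SEP_ID, d_model, layer counts) are illustrative assumptions, and the long-sequence memory optimizations described in the paper are omitted.

```python
import torch
import torch.nn as nn

class DecoderOnlyLM(nn.Module):
    """Causally masked self-attention stack with no encoder/cross-attention."""
    def __init__(self, vocab_size: int, d_model: int = 512,
                 n_heads: int = 8, n_layers: int = 6, max_len: int = 4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # simple learned positions
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        # "Decoder-only" means masked self-attention only, so an encoder
        # stack plus a causal mask is sufficient here.
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.blocks(self.embed(tokens) + self.pos(positions),
                        mask=causal_mask)
        return self.lm_head(h)  # next-token logits for every position

SEP_ID = 1  # hypothetical delimiter between source and target

def make_example(source_ids: list[int], target_ids: list[int]) -> torch.Tensor:
    # Input and output are fused into one sequence; the model conditions on
    # the source simply by attending back to it through the causal mask.
    return torch.tensor([source_ids + [SEP_ID] + target_ids])

model = DecoderOnlyLM(vocab_size=32000)
example = make_example(list(range(2, 50)), list(range(50, 80)))
logits = model(example[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), example[:, 1:].reshape(-1))
```

At generation time, such a model would be fed only the source tokens plus the delimiter and decoded autoregressively to produce the article; the loss above is computed over the whole concatenated sequence, though it could equally be restricted to the target portion.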