Abstract: Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models and pure graph models on a range of summarization tasks.
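The hybrid encoder described in the abstract can be sketched as follows: token states produced by a sequence encoder are refined by a few rounds of message passing over long-distance edges. This is a minimal illustrative sketch, not the paper's implementation — the plain RNN stands in for the actual sequence encoder (e.g. a biLSTM), and `graph_refine` is a simplified GGNN-style update; all function names, weight shapes, and the mean-aggregation choice are assumptions.

```python
import numpy as np

def sequence_encode(embeddings, W):
    # Hypothetical minimal sequence encoder: a plain unidirectional RNN
    # standing in for the paper's sequence component (e.g. a biLSTM).
    h = np.zeros(W.shape[0])
    states = []
    for x in embeddings:
        h = np.tanh(W @ np.concatenate([h, x]))
        states.append(h)
    return np.stack(states)

def graph_refine(states, edges, W_msg, steps=2):
    # Simplified GGNN-style graph component (an assumption, not the exact
    # update rule from the paper): each node state is updated with the mean
    # of transformed messages from its in-neighbors along long-distance edges.
    h = states.copy()
    for _ in range(steps):
        msgs = np.zeros_like(h)
        counts = np.zeros(len(h))
        for src, dst in edges:
            msgs[dst] += W_msg @ h[src]
            counts[dst] += 1
        counts = np.maximum(counts, 1)  # avoid dividing by zero for isolated nodes
        h = np.tanh(h + msgs / counts[:, None])
    return h
```

A summarization decoder would then attend over the graph-refined states instead of the raw sequence states, letting it exploit relationships (e.g. coreference or data-flow edges) that a purely sequential encoder struggles to capture.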
Keywords: Summarization, Graphs, Source Code
TL;DR: One simple trick to improve sequence models: Compose them with a graph model
Code: [CoderPat/structured-neural-summarization](https://github.com/CoderPat/structured-neural-summarization) + [2 community implementations](https://paperswithcode.com/paper/?openreview=H1ersoRqtm)
Data: [CNN/Daily Mail](https://paperswithcode.com/dataset/cnn-daily-mail-1)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/structured-neural-summarization/code)