GSAC: Improving Multi-Document Summarization with Graph Structure-Aware Encoder

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Sequence-to-sequence neural networks have achieved remarkable success in abstractive text summarization. However, current models are not directly suited to the task of multi-document summarization (MDS). In this paper, we propose a neural summarization framework that can effectively process lengthy texts and multiple input documents. We present a method to seamlessly integrate graph representations into the encoder-decoder model. Additionally, we introduce an auxiliary training objective that maximizes the node-level similarity between the compressed graph text and the ground-truth summary. Our approach uses a novel method for constructing text graphs that addresses the challenges of applying graph structures in multi-document scenarios. Built on a PRIMERA base model, our method outperforms previous state-of-the-art models on the Multi-News, Multi-XScience, and WikiSum datasets.
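The auxiliary objective described above (maximizing node-level similarity between graph representations and the ground-truth summary) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of cosine similarity, and the embedding inputs are all assumptions made for clarity.

```python
import numpy as np

def node_summary_similarity_loss(node_embs, summary_emb):
    """Hypothetical auxiliary loss: encourage each graph-node embedding
    to align with the ground-truth summary embedding.

    node_embs:   (num_nodes, dim) array of node representations.
    summary_emb: (dim,) embedding of the reference summary.
    Returns the negative mean cosine similarity, so that minimizing
    the loss maximizes node-summary agreement.
    """
    node_embs = np.asarray(node_embs, dtype=float)
    summary_emb = np.asarray(summary_emb, dtype=float)
    # Per-node cosine similarity with the summary embedding.
    node_norms = np.linalg.norm(node_embs, axis=1) + 1e-8
    summary_norm = np.linalg.norm(summary_emb) + 1e-8
    cos = node_embs @ summary_emb / (node_norms * summary_norm)
    # Negative mean similarity across nodes.
    return -cos.mean()
```

In practice such a term would be added to the standard sequence-to-sequence cross-entropy loss with a weighting coefficient tuned on validation data.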
Paper Type: long
Research Area: Summarization
Contribution Types: NLP engineering experiment
Languages Studied: English