Abstract: Abstractive text summarization aims to capture important information from text and integrate contextual information to guide summary generation. However, effectively integrating important and relevant information remains a challenging problem. Existing graph-based methods consider either word relations or structure information, but neglect the correlation between them. To simultaneously capture the word relations and structure information of sentences, we propose a novel Structure-to-Word dynamic interaction model for Abstractive Sentence Summarization (SWSum). Specifically, we first represent the structure and word-relation information of sentences by constructing a semantic scenario graph and a semantic word-relation graph based on FrameNet. We then stack multiple graph-based dynamic interaction layers that iteratively reinforce the correlation between the two graphs to learn node representations. Finally, a graph fusion module is designed to obtain better overall graph representations, which provide an attention-based context vector for the decoder to generate the summary. Experimental results demonstrate that our model outperforms existing state-of-the-art methods on two popular benchmark datasets, i.e., Gigaword and DUC 2004.
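The abstract describes a pipeline of cross-graph interaction layers followed by a fusion module that yields an attention-based context vector. The sketch below is a hypothetical, simplified NumPy illustration of that idea, not the authors' implementation: two graphs (a structure graph and a word-relation graph) exchange messages via cross-attention, and the fused node representations are pooled with attention against a decoder state. All names (`interaction_layer`, `fuse_and_context`, the projection `W`) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interaction_layer(h_s, h_w, A_s, A_w, W):
    """One hypothetical structure-word interaction step.

    h_s: (n_s, d) structure-graph node features
    h_w: (n_w, d) word-graph node features
    A_s, A_w: adjacency (similarity) matrices of each graph
    W: (d, d) learned cross-attention projection (random here)
    """
    # intra-graph message passing (row-normalized adjacency)
    h_s = softmax(A_s, axis=-1) @ h_s
    h_w = softmax(A_w, axis=-1) @ h_w
    # cross-graph attention: structure nodes attend to word nodes
    cross = softmax(h_s @ W @ h_w.T, axis=-1)   # (n_s, n_w)
    h_s = np.tanh(h_s + cross @ h_w)            # structure enriched by words
    h_w = np.tanh(h_w + cross.T @ h_s)          # words enriched by structure
    return h_s, h_w

def fuse_and_context(h_s, h_w, dec_state):
    """Fuse both graphs and build an attention context for the decoder."""
    nodes = np.vstack([h_s, h_w])               # (n_s + n_w, d)
    attn = softmax(nodes @ dec_state)           # attention over all nodes
    return attn @ nodes                          # (d,) context vector

# toy dimensions
rng = np.random.default_rng(0)
d, n_s, n_w = 8, 3, 5
h_s, h_w = rng.normal(size=(n_s, d)), rng.normal(size=(n_w, d))
A_s, A_w = rng.normal(size=(n_s, n_s)), rng.normal(size=(n_w, n_w))
W = rng.normal(size=(d, d))

for _ in range(2):                               # stacked interaction layers
    h_s, h_w = interaction_layer(h_s, h_w, A_s, A_w, W)

context = fuse_and_context(h_s, h_w, rng.normal(size=d))
print(context.shape)
```

In the actual model the projections would be trained parameters and the graphs would be built from FrameNet annotations; the sketch only conveys the iterative interaction-then-fusion control flow.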
External IDs: dblp:journals/nca/GuanGL25