Capturing Semantic Relationships Using Full Dependency Forests to Improve Consistency in Long Document Summarization

Published: 01 Jan 2025 · Last Modified: 22 Jul 2025 · IEEE Access 2025 · CC BY-SA 4.0
Abstract: Sentences in a document are connected by complex discourse relationships that can be viewed as a tree structure. This semantic structure provides important information for summarization and helps generate concise, coherent summaries. However, current neural network-based models usually treat articles as flat sentence sequences, ignoring this intrinsic structure. To integrate discourse tree information, we propose an abstractive summarization model that incorporates tree structure. This model captures the article's structure more accurately and produces succinct summaries by leveraging the semantic dependencies of the source material. In addition, because large pre-trained models are difficult to adapt to downstream tasks, we add noise to the pre-trained parameters to improve the model's performance on long document summarization. Experimental results show that our model's ROUGE scores outperform those of the previous state-of-the-art models on both the PubMed and arXiv datasets. We further performed human evaluation and N-gram evaluation; the results show that our method also improves the cohesiveness and semantic coherence of the generated summaries.
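The abstract mentions adding noise to pre-trained parameters before fine-tuning on long document summarization. The paper's exact perturbation scheme is not given here; the sketch below illustrates one common heuristic of this kind (a NoisyTune-style perturbation, where each parameter matrix receives uniform noise scaled by its own standard deviation). The function name `add_matrix_noise` and the `noise_lambda` value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def add_matrix_noise(params, noise_lambda=0.15, seed=0):
    """Perturb each pre-trained parameter matrix with uniform noise
    scaled by that matrix's standard deviation.

    Assumption: this mirrors the NoisyTune-style heuristic; the paper's
    actual noise scheme may differ.
    """
    rng = np.random.default_rng(seed)
    noisy = {}
    for name, w in params.items():
        # Uniform noise in [-0.5, 0.5), scaled per-matrix so that
        # parameters with larger spread receive proportionally more noise.
        noise = (rng.uniform(size=w.shape) - 0.5) * w.std() * noise_lambda
        noisy[name] = w + noise
    return noisy

# Toy "pre-trained" checkpoint: two parameter matrices.
params = {
    "encoder.weight": np.ones((4, 4)),            # zero variance -> unchanged
    "decoder.weight": np.arange(16.0).reshape(4, 4),
}
noisy = add_matrix_noise(params)
```

Because the noise is scaled by each matrix's standard deviation, a constant matrix (zero variance) is left untouched, while matrices with more spread are perturbed more, which keeps the perturbation relative to the scale of each layer.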