Abstract: Multi-document summarization (MDS) is the task of condensing the text of multiple documents into a single concise summary. Abstractive MDS aims to generate a coherent and fluent summary for multiple documents using natural language generation techniques. In this paper, we consider the unsupervised abstractive MDS setting, in which only documents are available and no ground-truth summaries are provided, and we propose Absformer, a new Transformer-based method for unsupervised abstractive summary generation. Our method consists of two steps: first, we pretrain a Transformer-based encoder with the masked language modeling (MLM) objective and use it to cluster the documents into groups of semantically similar documents; second, we train a Transformer-based decoder to generate abstractive summaries for the clusters of documents. To our knowledge, we are the first to successfully apply a Transformer-based model to the unsupervised abstractive MDS task. We evaluate our approach on three real-world datasets and demonstrate substantial improvements over state-of-the-art unsupervised abstractive methods in terms of evaluation metrics.
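As a rough illustration of this two-step pipeline, the sketch below embeds documents with an MLM-pretrained encoder, clusters the embeddings, and produces one summary per cluster. The model names, the choice of k-means, and the off-the-shelf summarizer standing in for the second step are all illustrative assumptions, not details from the paper; Absformer trains its own decoder without reference summaries.

```python
# Minimal sketch of a cluster-then-summarize pipeline (assumptions, not the
# paper's implementation): a generic MLM-pretrained encoder and k-means for
# step 1, and a generic pretrained summarizer standing in for the paper's
# decoder in step 2.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer, pipeline

documents = [
    "The central bank raised interest rates to curb inflation.",
    "Policymakers tightened monetary policy amid rising prices.",
    "The local team won the championship after a dramatic final.",
    "Fans celebrated the club's title victory in the city center.",
]

# Step 1: embed documents with an MLM-pretrained encoder, then cluster.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # stand-in encoder

with torch.no_grad():
    batch = tokenizer(documents, padding=True, truncation=True, return_tensors="pt")
    # Use the [CLS] hidden state as the document embedding (one common choice).
    embeddings = encoder(**batch).last_hidden_state[:, 0, :]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings.numpy())

# Step 2: generate one abstractive summary per cluster. Here a pretrained
# seq2seq summarizer is a stand-in; the paper instead trains a decoder
# without any ground-truth summaries.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
for c in sorted(set(labels)):
    cluster_text = " ".join(d for d, l in zip(documents, labels) if l == c)
    summary = summarizer(cluster_text, max_length=30, min_length=5)[0]["summary_text"]
    print(f"cluster {c}: {summary}")
```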