Abstractive Dialogue Summarization Based on Dynamic Pattern Exploiting Training

Published: 01 Jan 2022 · Last Modified: 15 Feb 2025 · IJCNN 2022 · CC BY-SA 4.0
Abstract: Pre-trained language models (PLMs) have shown remarkable performance on natural language processing tasks, although they typically require massive amounts of training data. Due to the lack of sufficient training instances, it is challenging for existing PLMs to achieve good results on dialogue summarization. In this paper, we propose DynamicPET, a pattern-exploiting training (PET) based method for abstractive dialogue summarization, which leverages the recent prompt learning paradigm to boost the performance of PLMs. In contrast to PET, our method does not rely on any task-specific unlabeled data, yet obtains strong performance on two dialogue summarization datasets, especially in few-shot scenarios.
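To give a rough sense of the PET-style prompting idea the abstract describes, the sketch below wraps a dialogue in hand-written natural-language patterns and lets a generic seq2seq PLM generate a summary. This is a minimal illustration, not the authors' DynamicPET implementation: the pattern strings, the BART backbone, and the example dialogue are all assumptions made for demonstration.

```python
# Minimal sketch of PET-style prompting for dialogue summarization.
# NOT the authors' DynamicPET code; the backbone and patterns are
# illustrative assumptions only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/bart-large-cnn"  # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# PET wraps each input in a natural-language pattern; several patterns can
# be applied and their outputs combined. These two are hypothetical.
PATTERNS = [
    "Dialogue: {dialogue} Summary:",
    "Summarize the following conversation. {dialogue}",
]

def summarize(dialogue: str, pattern: str, max_new_tokens: int = 60) -> str:
    """Apply one pattern to the dialogue and generate a summary."""
    inputs = tokenizer(pattern.format(dialogue=dialogue),
                       return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

dialogue = ("Amanda: I baked cookies. Do you want some? "
            "Jerry: Sure! Amanda: I'll bring you some tomorrow.")
for p in PATTERNS:
    print(summarize(dialogue, p))
```

In PET proper, the per-pattern outputs are used to train a final model on task-specific unlabeled data; the abstract's key claim is that DynamicPET removes that dependency on unlabeled data while keeping the benefits of pattern-based prompting.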