A Coarse-to-Fine Training Paradigm for Dialogue Summarization

ICANN (1) 2022 (modified: 20 Dec 2022)
Abstract: Pre-trained language models (PLMs) have achieved promising results on dialogue summarization. Previous works mainly encode semantic features from verbose dialogues to help PLMs model them, but extracting those features from the original dialogue text is costly. Moreover, the resulting semantic features may also be redundant, which hinders PLMs from capturing the dialogue's main idea. Rather than searching for such dispensable features, this paper proposes a coarse-to-fine training paradigm for dialogue summarization. Instead of directly fine-tuning PLMs to produce complete summaries, this paradigm first constructs a coarse-grained summarizer that automatically infers the key information with which to annotate each dialogue; a fine-grained summarizer then generates detailed summaries from the annotated dialogues. In addition, to exploit knowledge from out-of-domain pre-training, a meta-learning mechanism is adopted that cooperates with our training paradigm and helps a model pre-trained on other domains adapt to dialogue summarization. Experimental results demonstrate that our method outperforms competitive baselines.
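
As a rough illustration of the two-stage pipeline the abstract describes, the sketch below chains a coarse-grained summarizer and a fine-grained summarizer at inference time using Hugging Face seq2seq models. The checkpoint paths, the prefix-style annotation format, and the generation hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of coarse-to-fine dialogue summarization at inference time,
# assuming two seq2seq PLMs (e.g., BART) fine-tuned separately: a coarse
# summarizer that infers key information, and a fine summarizer that reads
# the dialogue annotated with that key information. Checkpoint names and the
# annotation format are hypothetical.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

COARSE_CKPT = "path/to/coarse-summarizer"  # hypothetical fine-tuned checkpoint
FINE_CKPT = "path/to/fine-summarizer"      # hypothetical fine-tuned checkpoint

# Both models are assumed to share the base BART tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
coarse = AutoModelForSeq2SeqLM.from_pretrained(COARSE_CKPT)
fine = AutoModelForSeq2SeqLM.from_pretrained(FINE_CKPT)

def summarize(dialogue: str) -> str:
    # Stage 1: the coarse-grained summarizer infers key information
    # (a short, high-level description) from the raw dialogue.
    inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
    key_ids = coarse.generate(**inputs, max_new_tokens=64, num_beams=4)
    key_info = tokenizer.decode(key_ids[0], skip_special_tokens=True)

    # Stage 2: annotate the dialogue with the inferred key information
    # (here, by simple prefixing) and let the fine-grained summarizer
    # generate the detailed summary from the annotated input.
    annotated = f"{key_info} </s> {dialogue}"
    inputs = tokenizer(annotated, return_tensors="pt", truncation=True)
    sum_ids = fine.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.decode(sum_ids[0], skip_special_tokens=True)
```

The point of the design, as the abstract presents it, is that the coarse stage makes the summary-relevant content explicit before the fine stage runs, so the fine-grained summarizer conditions on a dialogue whose main idea has already been surfaced rather than on the raw, redundant transcript.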