Abstract: The prevalence of online meetings on platforms such as Zoom and Microsoft Teams has highlighted the need for effective dialogue summarization. This study proposes Contrastive Topic-Length Prompt Learning (CTL-Prompt), a simple method for generating topic-based summaries. Motivated by the recent success of prompts in guiding aspect-based summarization, we first used topic prompts to steer the generated summary toward a particular topic. However, our preliminary experiments revealed that relying solely on the topic prompt often produces nearly identical summaries across topics. We therefore added a length prompt that controls the length of each generated summary based on the length of the reference summary for that topic. While this yielded more concise summaries, the summaries across topics remained similar. To encourage the model to produce concise yet diverse summaries across topics, we propose applying contrastive learning to topic-length prompts, using positive and negative pairs to push the model to learn the similarities and differences between topics. Experimental results showed that our model outperformed baseline models on ROUGE and BERTScore on the DialogSum dataset, and similar results were observed on the MACSum dataset. Our work is available at [anonymized].
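The core mechanism, contrastive learning over topic-length prompts, can be illustrated with a minimal sketch. The snippet below assumes an InfoNCE-style objective over summary representations, where same-topic representations form positive pairs and other-topic representations form negatives; the function name, tensor shapes, and temperature `tau` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a contrastive objective over topic-conditioned summary
# representations. Hypothetical names/shapes; not the authors' exact code.
import torch
import torch.nn.functional as F

def contrastive_topic_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: pull same-topic summary representations together,
    push different-topic representations apart.

    anchor:    (d,)   representation of a summary generated with one
                      topic-length prompt
    positives: (p, d) same-topic representations (e.g., the reference
                      summary for that topic)
    negatives: (n, d) representations of summaries for other topics
    """
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)

    pos_sim = (pos @ anchor) / tau  # (p,) similarity to each positive
    neg_sim = (neg @ anchor) / tau  # (n,) similarity to each negative

    # Contrast each positive against all negatives: the correct "class"
    # for every row is index 0 (the positive similarity).
    logits = torch.cat(
        [pos_sim.unsqueeze(1), neg_sim.expand(pos_sim.size(0), -1)], dim=1
    )  # (p, 1 + n)
    targets = torch.zeros(pos_sim.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)
```

In this formulation, minimizing the loss increases the anchor's similarity to same-topic summaries relative to other-topic summaries, which is one plausible way to realize the cross-topic diversity the abstract describes.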
Paper Type: long
Research Area: Summarization
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English