PromptSum: Planning with Mixed Prompts for Parameter-Efficient Controllable Abstractive Summarization

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: summarization, controllability, parameter-efficiency, prompt-tuning, pre-training, multi-tasking
Abstract: Prompt tuning (PT), a technique that tunes only additional prompt embeddings while keeping the backbone pre-trained language model frozen, has shown promising results on language understanding tasks, especially in low-resource scenarios. However, effective prompt design methods for generation tasks such as summarization are still lacking. At the same time, summarization guided through instructions (discrete prompts) can achieve a desirable double objective of higher quality and controllability in summary generation. Towards a triple goal of data-efficiency, parameter-efficiency and controllability, we introduce PromptSum, a method combining PT with a multi-task objective and discrete entity prompts for abstractive summarization. Our model achieves state-of-the-art results on several popular few-shot benchmarks as well as a strong level of controllability through entities, all while tuning several orders of magnitude fewer parameters.
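The abstract combines two ingredients: tunable soft prompt embeddings prepended to a frozen backbone, and a discrete entity prompt spelled out in the input text to steer which entities the summary covers. Below is a minimal sketch of that combination, assuming a T5-style encoder-decoder backbone from Hugging Face Transformers; the backbone choice, soft-prompt length, and entity-prompt format are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: prompt tuning + discrete entity prompt for summarization.
# Only the soft prompt embeddings receive gradients; the backbone is frozen.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer


class SoftPromptSummarizer(nn.Module):
    def __init__(self, model_name="t5-base", prompt_length=100):
        super().__init__()
        self.backbone = T5ForConditionalGeneration.from_pretrained(model_name)
        # Freeze every backbone parameter; only the soft prompt is tuned.
        for p in self.backbone.parameters():
            p.requires_grad = False
        d_model = self.backbone.config.d_model
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, d_model) * 0.02)

    def forward(self, input_ids, attention_mask, labels):
        # Embed the (entity prompt + document) tokens with the frozen embedding table.
        inputs_embeds = self.backbone.get_input_embeddings()(input_ids)
        batch = inputs_embeds.size(0)
        # Prepend the trainable soft prompt to every example in the batch.
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, inputs_embeds], dim=1)
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(
            inputs_embeds=inputs_embeds,
            attention_mask=attention_mask,
            labels=labels,
        )


tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = SoftPromptSummarizer()

# Discrete entity prompt: the entities the summary should mention are written
# as plain text in front of the source document (the exact format is an assumption).
entities = "Entities: Alice; Acme Corp."
document = "Alice joined Acme Corp. as chief scientist on Monday ..."
summary = "Alice becomes chief scientist at Acme Corp."

enc = tokenizer(entities + " Document: " + document, return_tensors="pt", truncation=True)
lab = tokenizer(summary, return_tensors="pt", truncation=True).input_ids

loss = model(enc.input_ids, enc.attention_mask, labels=lab).loss
loss.backward()  # gradients flow only into the soft prompt embeddings
```

Because only `soft_prompt` is trainable, the number of updated parameters is several orders of magnitude smaller than full fine-tuning, while changing the entity string at inference time offers a handle for controllability.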
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
TL;DR: A new prompting mechanism which enables controllable, parameter-efficient and data-efficient summarization.