Towards Unified Prompt Tuning for Few-shot Learning

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot learning by employing task-specific prompts. However, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits their few-shot performance on downstream tasks. It would be desirable if models could acquire some prompting knowledge before task adaptation. We present the Unified Prompt Tuning (UPT) framework, which improves few-shot learning for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel Prompt-Options-Verbalizer paradigm is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization ability for accurate adaptation to previously unseen tasks. After multi-task learning, the PLM can be fine-tuned on any target few-shot NLP task using the same prompting paradigm. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-art methods for prompt-based fine-tuning.
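The abstract describes casting tasks into a shared Prompt-Options-Verbalizer format so that a PLM predicts a verbalizer token in a masked slot. Below is a minimal sketch of what such an input construction might look like; the template wording, the `build_pov_input` helper, the option list, and the sentiment verbalizer mapping are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch: turning a task instance into a
# Prompt-Options-Verbalizer style input for masked-LM prediction.

def build_pov_input(text: str, prompt: str, options: list[str],
                    mask_token: str = "[MASK]") -> str:
    """Concatenate the input text with a task prompt, the enumerated
    label options, and a masked slot for the verbalizer token."""
    option_str = " ".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    return f"{text} {prompt} Options: {option_str}. Answer: {mask_token}"

# Example (sentiment classification): the PLM would be trained to fill
# the mask with a verbalizer token such as "great" or "terrible".
verbalizer = {"positive": "great", "negative": "terrible"}
example = build_pov_input(
    text="The movie was a delightful surprise.",
    prompt="What is the sentiment of this review?",
    options=list(verbalizer.values()),
)
print(example)
```

Under this kind of shared format, heterogeneous non-target tasks can be mixed during multi-task learning, and a target few-shot task is later fine-tuned with the same paradigm.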