Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Anonymous

16 Oct 2021 (modified: 05 May 2023) · ACL ARR 2021 October Blind Submission
Abstract: Prompt-based learning for Pre-trained Language Models (PLMs) has achieved remarkable performance in few-shot learning by exploiting prompts as task guidance and reformulating downstream tasks as masked language modeling problems. In most existing approaches, however, the high performance of prompt-based learning relies heavily on handcrafted prompts and verbalizers, which may limit its application in real-world scenarios. To address this issue, we present CP-Tuning, the first end-to-end Contrastive Prompt Tuning framework for PLMs that requires no manual engineering of task-specific prompts or verbalizers. It integrates a task-invariant continuous prompt encoding technique with fully trainable prompt parameters. We further propose a pair-wise cost-sensitive contrastive loss to optimize the model, which achieves verbalizer-free class mapping and enhances the task-invariance of prompts. Experiments over a variety of NLP tasks show that CP-Tuning consistently outperforms state-of-the-art methods.
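The abstract names a pair-wise cost-sensitive contrastive loss as the mechanism for verbalizer-free class mapping: instances sharing a class are pulled together in embedding space while instances of different classes are pushed apart, so no handcrafted label-word mapping is needed. The paper's exact formulation is not given here, so the following is only a minimal sketch of a generic pair-wise contrastive loss with per-pair-type cost weights; the function name, the `margin`, and the `pos_cost`/`neg_cost` weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pairwise_contrastive_loss(embeddings, labels, margin=1.0,
                              pos_cost=1.0, neg_cost=1.0):
    """Sketch of a pair-wise cost-sensitive contrastive loss (assumed form).

    Same-class pairs are penalized by their squared distance (pulled
    together); different-class pairs are penalized only when closer than
    `margin` (pushed apart). `pos_cost` and `neg_cost` are hypothetical
    cost weights for the two pair types.
    """
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                # positive pair: minimize distance
                total += pos_cost * d ** 2
            else:
                # negative pair: hinge, penalize only within the margin
                total += neg_cost * max(0.0, margin - d) ** 2
            pairs += 1
    return total / pairs
```

Under such a loss, class prediction can be done by nearest-centroid matching in the embedding space rather than through a verbalizer, which is the intuition behind "verbalizer-free class mapping."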