STT: Soft Template Tuning for Few-Shot Learning

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: With the rapid growth of large pre-trained language models, fine-tuning all model parameters for each downstream task is becoming computationally prohibitive. Recently developed prompt-based methods freeze the entire model and update only the so-called prompt parameters appended to the inputs, significantly reducing the burden of full fine-tuning. However, standard prompt-based methods mainly consider the case where sufficient data are available for the downstream task. It remains unclear whether their advantage transfers to the few-shot regime, where only limited data are available per task. Our empirical studies suggest there is still a gap between prompt tuning and full fine-tuning for few-shot learning. We propose a new prompt-tuning framework, called Soft Template Tuning (STT), to bridge the gap. STT combines manual prompts and auto-prompts, and treats downstream classification as a masked language modeling task. STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Importantly, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.
Paper Type: short
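The snippet below is a minimal sketch of the idea described in the abstract, not the authors' implementation: trainable soft-prompt ("auto-prompt") embeddings are prepended to the input of a frozen masked language model, a hand-written ("manual") template with a [MASK] slot is appended to the text, and class scores are read off the masked-LM logits of verbalizer tokens. It assumes a Hugging Face BERT-style model; the class name SoftTemplatePrompt, the template "It was [MASK].", and the verbalizers "terrible"/"great" are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM


class SoftTemplatePrompt(nn.Module):
    """Illustrative sketch: trainable soft prompt + manual template + verbalizer."""

    def __init__(self, model_name="bert-base-uncased", n_soft_tokens=20,
                 verbalizers=("terrible", "great")):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.mlm = AutoModelForMaskedLM.from_pretrained(model_name)
        # Freeze the pre-trained model; only the soft prompt below is trained.
        for p in self.mlm.parameters():
            p.requires_grad = False
        emb_dim = self.mlm.get_input_embeddings().embedding_dim
        # "Auto-prompt": continuous prompt vectors prepended to every input.
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(n_soft_tokens, emb_dim))
        # One verbalizer token per class; its MLM logit acts as the class score.
        self.verbalizer_ids = [self.tokenizer.convert_tokens_to_ids(w)
                               for w in verbalizers]

    def forward(self, sentences):
        # "Manual prompt": a hand-written template with a [MASK] prediction slot.
        texts = [f"{s} It was {self.tokenizer.mask_token}." for s in sentences]
        enc = self.tokenizer(texts, return_tensors="pt",
                             padding=True, truncation=True)
        token_embeds = self.mlm.get_input_embeddings()(enc["input_ids"])
        batch = token_embeds.size(0)
        soft = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([soft, token_embeds], dim=1)
        attention_mask = torch.cat(
            [torch.ones(batch, soft.size(1), dtype=enc["attention_mask"].dtype),
             enc["attention_mask"]], dim=1)
        logits = self.mlm(inputs_embeds=inputs_embeds,
                          attention_mask=attention_mask).logits
        # Read the MLM logits at the [MASK] position (offset by the soft prompt).
        mask_pos = (enc["input_ids"] == self.tokenizer.mask_token_id).nonzero()
        mask_logits = logits[mask_pos[:, 0], mask_pos[:, 1] + soft.size(1)]
        # Class scores are the logits of the verbalizer tokens.
        return mask_logits[:, self.verbalizer_ids]


# Few-shot usage: optimize only the soft prompt on a handful of labeled examples.
model = SoftTemplatePrompt()
scores = model(["a charming and often moving film ."])  # shape: (1, num_classes)
```

In this reading, no new parameters are added beyond the soft prompt itself, which matches the abstract's claim that STT bridges the gap without introducing additional parameters; the exact template, prompt length, and verbalizer choices above are assumptions for illustration.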