Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

Anonymous

16 Feb 2022 (modified: 05 May 2023) | ACL ARR 2022 February Blind Submission | Readers: Everyone
Abstract: Pre-trained masked language models have been successfully used for few-shot learning by formulating downstream tasks as text infilling. However, discriminative pre-trained models like ELECTRA, a strong alternative in full-shot settings, do not fit into this paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models on a wide range of tasks. ELECTRA is pre-trained to distinguish whether a token is generated or original. We naturally extend this objective to prompt-based few-shot learning by training the model to score the originality of verbalizer tokens, without introducing new parameters. Our method can be easily adapted to tasks involving multi-token verbalizers without extra computational overhead. Analysis shows that the distributions learned by ELECTRA align better with downstream tasks.
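The scoring idea can be illustrated with a short zero-shot sketch: fill a prompt template with each candidate verbalizer, run ELECTRA's discriminator over the filled text, and pick the label whose verbalizer tokens look most "original". The checkpoint name, template, and verbalizers below are illustrative assumptions, not the authors' released code or few-shot training setup.

```python
# Minimal zero-shot sketch of originality scoring with ELECTRA's discriminator.
# Assumptions: the HuggingFace "google/electra-small-discriminator" checkpoint,
# a hypothetical sentiment template "It was <verbalizer>.", and hypothetical
# verbalizers {"positive": "great", "negative": "terrible"}.
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
model.eval()

def score_label(text: str, verbalizer: str) -> float:
    """Average 'originality' of the verbalizer tokens inside the filled prompt."""
    prompt = f"{text} It was {verbalizer}."          # hypothetical template
    enc = tokenizer(prompt, return_tensors="pt")
    verb_ids = tokenizer(verbalizer, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the verbalizer span; multi-token verbalizers need no special handling.
    start = next(i for i in range(len(ids)) if ids[i:i + len(verb_ids)] == verb_ids)
    with torch.no_grad():
        logits = model(**enc).logits[0]              # one logit per token
    # The discriminator head predicts "replaced" for positive logits,
    # so negate to obtain an originality score.
    return -logits[start:start + len(verb_ids)].mean().item()

labels = {"positive": "great", "negative": "terrible"}
text = "The movie was a complete waste of time."
pred = max(labels, key=lambda y: score_label(text, labels[y]))
print(pred)  # expected: "negative"
```

In the few-shot setting described in the abstract, the same per-token originality scores would be fine-tuned on the labeled examples rather than used zero-shot, and no new parameters are required because the pre-trained discriminator head is reused directly.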
Paper Type: short