Context-Aware Prompt: Customize A Unique Prompt For Each Input

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Since the introduction of BERT, pre-trained language models have become the dominant approach for solving many NLP tasks. Typically, a linear classifier is added to the head of the model and fine-tuned to fit downstream tasks. A more recent approach, known as prompt-based learning or prompt-learning, uses prompts to perform various downstream tasks and is considered better able to uncover the potential of the language model. Prior studies, however, attempted to find a single universal prompt for a given task across all samples. We therefore propose a novel method, Context-Aware Prompt (CAP), which provides a unique continuous prompt for each input sample by incorporating contextual information, further probing the capabilities of language models. On the SuperGLUE benchmark, our method outperforms multiple models with vanilla fine-tuning. Furthermore, we extend the use of prompts to Replaced Token Detection (RTD) type prompts, allowing models such as ELECTRA and DeBERTaV3, which employ RTD as a training objective, to use prompts for downstream tasks.
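The abstract does not give the architecture of CAP, but the core idea of a per-input continuous prompt can be illustrated with a minimal sketch: a small generator network conditions on a pooled representation of the input's token embeddings and emits a sequence of prompt vectors that are prepended to the input before it is fed to the pre-trained model. The module name, the mean-pooling step, the MLP generator, and all dimensions below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a context-aware (per-input) continuous prompt generator.
import torch
import torch.nn as nn


class ContextAwarePromptGenerator(nn.Module):
    """Produces a distinct continuous prompt for each input by conditioning
    on a pooled representation of that input's token embeddings.
    (Illustrative only; not the paper's exact architecture.)"""

    def __init__(self, hidden_size: int, prompt_length: int):
        super().__init__()
        self.prompt_length = prompt_length
        self.hidden_size = hidden_size
        # Small MLP mapping the pooled context vector to prompt_length
        # continuous prompt embeddings.
        self.generator = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, prompt_length * hidden_size),
        )

    def forward(self, input_embeds: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        mask = attention_mask.unsqueeze(-1).float()
        # Mean-pool token embeddings over non-padding positions as the "context".
        pooled = (input_embeds * mask).sum(1) / mask.sum(1).clamp(min=1e-6)
        prompts = self.generator(pooled)  # (batch, prompt_length * hidden)
        return prompts.view(-1, self.prompt_length, self.hidden_size)


# Usage: prepend the generated prompt to the PLM's input embeddings.
batch, seq_len, hidden = 2, 16, 768
embeds = torch.randn(batch, seq_len, hidden)
mask = torch.ones(batch, seq_len, dtype=torch.long)
gen = ContextAwarePromptGenerator(hidden_size=hidden, prompt_length=8)
prompt_embeds = gen(embeds, mask)                        # (2, 8, 768)
full_inputs = torch.cat([prompt_embeds, embeds], dim=1)  # (2, 24, 768)
```

Under this reading, the prompt varies with each sample's content rather than being a single learned prompt shared across the task; for RTD-style models such as ELECTRA or DeBERTaV3, the same prompt vectors would be prepended before the replaced-token-detection head scores verbalizer tokens, though the abstract does not spell out that mechanism.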
Paper Type: long
