Paper Link: https://openreview.net/forum?id=tbpEVzeiCto
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt to each input instance during model training. It freezes the pre-trained language model and optimizes only a few task-specific prompts. In this paper, we propose a conditional prompt generation method that produces a prompt for each input instance, referred to as Instance-Dependent Prompt Generation (IDPG). Unlike traditional prompt tuning methods, which use a fixed prompt, IDPG introduces a lightweight, trainable component that generates prompts conditioned on each input sentence. Extensive experiments on ten natural language understanding (NLU) tasks show that the proposed strategy consistently outperforms various prompt tuning baselines and is on par with other efficient transfer learning methods such as Compacter, while tuning far fewer model parameters.
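To make the core idea concrete, here is a minimal PyTorch sketch of an instance-dependent prompt generator: a small bottleneck MLP maps a sentence representation to a sequence of prompt vectors that are prepended to the frozen model's input embeddings. This is not the authors' exact architecture; the hidden size, bottleneck width, prompt length, and the mean-pooled sentence representation are illustrative assumptions.

```python
import torch
import torch.nn as nn


class InstanceDependentPromptGenerator(nn.Module):
    """Lightweight trainable component that generates per-instance prompts.

    A down-projection followed by an up-projection keeps the number of
    trainable parameters small while the pre-trained LM stays frozen.
    All sizes below are assumptions for illustration, not the paper's values.
    """

    def __init__(self, hidden_size=768, bottleneck=256, prompt_length=5):
        super().__init__()
        self.prompt_length = prompt_length
        self.hidden_size = hidden_size
        self.generator = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, prompt_length * hidden_size),
        )

    def forward(self, sentence_repr):
        # sentence_repr: (batch, hidden_size) -> (batch, prompt_length, hidden_size)
        prompts = self.generator(sentence_repr)
        return prompts.view(-1, self.prompt_length, self.hidden_size)


# Usage sketch: prepend generated prompts to the frozen model's input embeddings.
batch, seq_len, hidden = 4, 32, 768
input_embeds = torch.randn(batch, seq_len, hidden)   # stand-in for frozen embedding output
sentence_repr = input_embeds.mean(dim=1)             # assumed sentence representation (mean pool)
gen = InstanceDependentPromptGenerator()
prompted = torch.cat([gen(sentence_repr), input_embeds], dim=1)
print(prompted.shape)  # torch.Size([4, 37, 768]): 5 prompt vectors + 32 token embeddings
```

In training, only the generator's parameters would receive gradients, which is what keeps the tuned parameter count far below full fine-tuning.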
Copyright Consent Signature (type Name Or NA If Not Transferrable): Zhuofeng Wu
Copyright Consent Name And Address: The University of Michigan School of Information, 105 S State St, Ann Arbor, MI 48109
Presentation Mode: This paper will be presented in person in Seattle