Generative Prompt Tuning for Relation Classification

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Prompt tuning adapts pre-trained language models to downstream tasks by closing the objective gap between pre-training and those tasks. Current methods mainly cast downstream tasks as masked language modeling (MLM) problems, which works well for tasks with simple label sets. However, relation classification often exhibits a complex label space, and vanilla MLM-style prompt tuning struggles with label verbalizations of variable length because the locations and number of masked tokens are typically fixed. Inspired by the text infilling task used to pre-train generative models, which flexibly predicts missing spans, we propose a novel generative prompt tuning method that reformulates relation classification as an infilling problem. This removes the rigid prompt restrictions, allowing our method to handle label verbalizations of varying lengths at multiple predicted positions and thus fully exploit the rich semantics of entity and relation labels. In addition, we design entity-guided decoding and discriminative relation scoring to predict relations effectively and efficiently at inference time. Extensive experiments under both low-resource and fully supervised settings demonstrate the effectiveness of our approach.
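
The following is a minimal sketch, not the authors' implementation, of how relation classification can be cast as infilling with a generative model: the input sentence is paired with a template whose masked spans stand in for entity types and the relation phrase, and each relation's candidate verbalization is scored by its conditional log-likelihood under a BART-style seq2seq model. The prompt template, verbalizations, and relation labels below are illustrative assumptions.

import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def score_verbalization(source: str, target: str) -> float:
    """Length-normalized log-likelihood of `target` given `source`."""
    src = tokenizer(source, return_tensors="pt")
    tgt = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_ids=src.input_ids,
                    attention_mask=src.attention_mask,
                    labels=tgt)
    # out.loss is the mean token-level cross-entropy over the target,
    # so its negation is an average log-likelihood per target token.
    return -out.loss.item()

sentence = "Steve Jobs co-founded Apple in 1976."
subj, obj = "Steve Jobs", "Apple"
# Hypothetical infilling-style prompt: <mask> spans are to be filled
# with the entity types and a relation phrase of arbitrary length.
source = f"{sentence} The <mask> {subj} <mask> the <mask> {obj}."

# Hypothetical label verbalizations of varying length, one per relation.
verbalizations = {
    "org:founded_by": f"The person {subj} is the founder of the organization {obj}.",
    "per:employee_of": f"The person {subj} is an employee of the organization {obj}.",
    "no_relation": f"The person {subj} has no known relation to the organization {obj}.",
}

scores = {rel: score_verbalization(source, text)
          for rel, text in verbalizations.items()}
print(max(scores, key=scores.get))

Because each relation is scored as a whole target sequence, verbalizations may differ in length without any fixed-slot constraint; the paper's entity-guided decoding and discriminative relation scoring refine this basic idea for efficient inference.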