RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: Pre-trained language models (PLMs), which carry generic knowledge, can be a good starting point for adapting to downstream applications. However, it is difficult to generalize PLMs to new tasks when only a limited number of labeled samples are given. In this work, we show that the Relation Graph augmented Learning (RGL) method can obtain better performance on few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on the label consistency between samples in the same batch, and learns to solve the resulting node classification and link prediction problems on the relation graph. In this way, RGL fully exploits the limited supervised information, which boosts tuning effectiveness. Extensive experiments on benchmark tasks show that RGL consistently improves the performance of prompt-based tuning strategies.
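The abstract's core idea, building a batch-level relation graph from label consistency and training on a link prediction objective over it, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of that idea in PyTorch, not the paper's actual implementation: the function name `relation_graph_loss`, the use of cosine similarity as the edge score, and the binary cross-entropy objective are all assumptions, and the paper's full method also includes a node classification component not shown here.

```python
import torch
import torch.nn.functional as F

def relation_graph_loss(embeddings: torch.Tensor,
                        labels: torch.Tensor) -> torch.Tensor:
    """Link-prediction loss over a batch-level relation graph (sketch).

    Following the abstract's description: two samples in the batch are
    connected by an edge iff they share a label, and the model is trained
    to predict these edges from pairwise similarity of sample embeddings
    (e.g., the [MASK]-token representations used in prompt-based tuning).
    """
    # Target adjacency: A[i, j] = 1 where samples i and j share a label.
    adj = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

    # Predicted edge scores: cosine similarity between sample embeddings
    # (assumed scoring function; the paper may use a different one).
    z = F.normalize(embeddings, dim=-1)
    logits = z @ z.t()

    # Binary cross-entropy between predicted and target edges,
    # masking out the trivial self-loops on the diagonal.
    mask = 1.0 - torch.eye(len(labels), device=labels.device)
    return F.binary_cross_entropy_with_logits(logits, adj, weight=mask)
```

In use, such an auxiliary loss would presumably be added to the standard prompt-based tuning objective, so that every pair of samples in a batch, not just each sample individually, contributes supervised signal, which matches the abstract's claim of fully exploiting the limited labeled data.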
Paper Type: short