RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=O_W4h-44Rzf
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Pre-trained language models (PLMs) provide a good starting point for downstream applications. However, it is difficult to generalize PLMs to new tasks given only a few labeled samples. In this work, we show that Relation Graph augmented Learning (RGL) can improve performance on few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on label consistency between samples in the same batch, and learns to solve the resulting node classification and link prediction problems on this relation graph. In this way, RGL fully exploits the limited supervised information, which boosts tuning effectiveness. Extensive experimental results show that RGL consistently improves the performance of prompt-based tuning strategies.
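To make the idea concrete, here is a minimal PyTorch-style sketch of the kind of auxiliary objective the abstract describes, based only on the abstract itself. The function names (batch_relation_graph, link_prediction_loss) and the cosine-similarity edge scorer are illustrative assumptions; the paper's actual formulation may differ.

```python
# Hypothetical sketch of a label-consistency relation graph and a link
# prediction loss over an in-batch relation graph. Not the authors' code.
import torch
import torch.nn.functional as F

def batch_relation_graph(labels: torch.Tensor) -> torch.Tensor:
    """Adjacency matrix: A[i, j] = 1 iff in-batch samples i and j share a label."""
    return (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

def link_prediction_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Score every pair of samples and match the label-consistency graph."""
    z = F.normalize(embeddings, dim=-1)    # (B, d) unit-norm sample embeddings
    scores = z @ z.t()                     # pairwise cosine similarity, used as logits
    target = batch_relation_graph(labels)  # (B, B) ground-truth edges from labels
    return F.binary_cross_entropy_with_logits(scores, target)

# Toy usage: 4 samples, 2 classes. In practice the embeddings would come
# from a prompt-tuned PLM, and this loss would be added to the usual
# (node) classification loss on the same batch.
emb = torch.randn(4, 16, requires_grad=True)
lab = torch.tensor([0, 1, 0, 1])
loss = link_prediction_loss(emb, lab)
loss.backward()
print(loss.item())
```

Under this reading, the node classification problem is simply the standard per-sample label prediction, while the link prediction term extracts extra supervision from the O(B^2) pairwise relations in each batch rather than from the B labels alone.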
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC+8
Copyright Consent Signature (type Name Or NA If Not Transferrable): Yaqing Wang
Copyright Consent Name And Address: Baidu Inc., Baidu Technology Park