RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning
Abstract: Pre-trained language models (PLMs) provide a strong starting point for downstream applications. However, it is difficult to generalize PLMs to new tasks given only a few labeled samples. In this work, we show that Relation Graph augmented Learning (RGL) can improve performance on few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on the label consistency between samples in the same batch, and learns to solve the resulting node classification and link prediction problems on that graph. In this way, RGL fully exploits the limited supervised information, which boosts tuning effectiveness. Extensive experimental results show that RGL consistently improves the performance of prompt-based tuning strategies.
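The batch-level relation graph described in the abstract is simple to sketch: its target adjacency marks pairs of samples that share a label, and a link prediction loss scores pairwise feature similarity against that target. The following is a minimal PyTorch sketch, not the authors' implementation; the helper name `relation_graph_link_loss` and the choice of cosine similarity as the link score are assumptions, and the companion node classification objective (a standard cross-entropy over each sample's label) is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def relation_graph_link_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Link prediction loss on a label-consistency relation graph (hypothetical sketch).

    features: [B, D] encoded representations of the batch (e.g., from a PLM).
    labels:   [B] integer class labels.
    """
    # Target adjacency: A[i, j] = 1 iff samples i and j share a label.
    target = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    # Predicted link scores: cosine similarity between sample features
    # (an assumption; any pairwise scoring function could be substituted).
    feats = F.normalize(features, dim=-1)
    logits = feats @ feats.t()
    # Binary cross-entropy between predicted links and the target graph.
    return F.binary_cross_entropy_with_logits(logits, target)

# Toy usage: a batch of 4 random feature vectors with labels [0, 1, 0, 1].
loss = relation_graph_link_loss(torch.randn(4, 768), torch.tensor([0, 1, 0, 1]))
```

Under this reading, the relation graph acts as free extra supervision: every batch of B labeled samples yields on the order of B^2 pairwise training signals rather than just B classification targets, which is why the approach helps most in the few-shot regime.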