Highlights

• Previous semantic-based few-shot methods focus on designing complex fusion modules while ignoring the generalization capacity of language models.
• We propose a simple framework that fully exploits the language model with learnable prompts.
• Our performance in 1-shot learning surpasses the current SOTA by 3.3% in accuracy.