Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

Published: 04 Mar 2023, Last Modified: 14 Apr 2024
ME-FoMo 2023 Spotlight
Keywords: natural language processing, zero-shot language models, large language models
TL;DR: We introduce Flipped Learning, an instruction-tuning method that computes the likelihood of the task instruction given the input instance and label.
Abstract: Instruction-tuning, which fine-tunes the language model (LM) on various downstream tasks with task instructions, has improved zero-shot task generalization performance. However, instruction-tuned LMs still struggle to generalize to challenging unseen tasks containing novel labels. In this paper, we propose Flipped Learning, an alternative method of instruction-tuning that trains the LM to generate the task instruction given the input instance and label. During inference, the LM trained with Flipped Learning, referred to as FLIPPED, selects the label option that is most likely to generate the task instruction. On 14 tasks of the BIG-bench benchmark, the 11B-sized FLIPPED outperforms zero-shot T0-11B and even a 16-times-larger 3-shot GPT-3 (175B) on average by 8.4 and 9.7 percentage points, respectively. Flipped Learning gives particularly large improvements on tasks with unseen labels, outperforming T0-11B by up to +20% average F1 score. This indicates that the strong task generalization of Flipped Learning comes from improved generalization to novel labels.
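For intuition, below is a minimal sketch of Flipped-style inference under stated assumptions: each label option is scored by the log-likelihood of the task instruction conditioned on the input instance and that label, and the highest-scoring option is returned. The checkpoint name, prompt format, and helper functions are illustrative assumptions, not the authors' released code.

```python
# Sketch of Flipped-style inference: pick the label whose concatenation with the
# input makes the task instruction most likely under a seq2seq LM.
# Checkpoint and "instance + label" formatting are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bigscience/T0_3B"  # assumed stand-in; any T5-style seq2seq LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def instruction_log_likelihood(instance: str, label: str, instruction: str) -> float:
    """Sum of token log-probs of the instruction, conditioned on instance + label."""
    enc = tokenizer(f"{instance} {label}", return_tensors="pt")
    dec = tokenizer(instruction, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=dec.input_ids)
        log_probs = out.logits.log_softmax(dim=-1)                       # (1, T, vocab)
        token_logps = log_probs.gather(-1, dec.input_ids.unsqueeze(-1))  # (1, T, 1)
    return token_logps.sum().item()

def flipped_predict(instance: str, label_options: list[str], instruction: str) -> str:
    """Return the label option that maximizes P(instruction | instance, label)."""
    scores = {lbl: instruction_log_likelihood(instance, lbl, instruction) for lbl in label_options}
    return max(scores, key=scores.get)

# Example: sentiment classification, where label strings may be unseen during training
instruction = "Is the review positive or negative?"
print(flipped_predict("The movie was a delight from start to finish.",
                      ["positive", "negative"], instruction))
```

Because the instruction, rather than the label, is the generation target, scoring scales with the (fixed) instruction length and does not depend on the label strings seen during training, which is the intuition behind the gains on unseen labels.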
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2210.02969/code)