Learning to Generate Instructions to Adapt Language Models to New Tasks

Published: 28 Oct 2023, Last Modified: 26 Nov 2023. Instruction Workshop @ NeurIPS 2023
Keywords: task generation, instruction tuning, domain adaptation
TL;DR: We present Bonito, the first open-source model for conditional task generation: the problem of converting an unannotated corpus into a collection of tasks for instruction tuning.
Abstract: We present Bonito, the first open-source model for conditional task generation: the problem of converting an unannotated corpus into a collection of tasks for instruction tuning. Our goal is to enable efficient task adaptation of instruction-tuned language models on users' specialized, private data without relying on proprietary API-access-only models like GPT-4. We create Bonito by remixing existing, general-purpose instruction tuning data into a new training mixture for conditional task generation. Bonito learns to generate new tasks conditioned on the text and the desired task type. The generated instructions in the specialized domain can then be used to further train language models. We demonstrate that this procedure leads to improved performance on extractive question answering and yes-no question answering: across four datasets, each in a different domain, Bonito improves the F1 score of FLAN-T5 Small by an average of 14.5% and FLAN-T5 Base by an average of 4.4%. We also find that Bonito improves FLAN-T5 Large on two of the four datasets but shows slight negative transfer on the other two. Overall, these results show a promising direction for adapting instruction-tuned language models to new tasks without using proprietary models.
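Below is a minimal sketch of the conditional task generation workflow the abstract describes: prompting a Bonito-style model with an unannotated passage and a desired task type, then using the generated task for instruction tuning. The checkpoint path and the "task type + passage" prompt format are illustrative assumptions, not the paper's exact interface.

```python
# Sketch: conditional task generation with a Bonito-style seq2seq model.
# Assumptions (not from the paper): the checkpoint path "path/to/bonito-checkpoint"
# and the "<task type>\n<passage>" prompt layout are placeholders for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/bonito-checkpoint"  # hypothetical checkpoint location
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Unannotated text from the user's specialized domain.
passage = (
    "The mitochondrion is an organelle found in most eukaryotic cells. "
    "It generates most of the cell's supply of ATP."
)
task_type = "yes-no question answering"  # the desired task type to condition on

# Condition generation on both the passage and the task type.
prompt = f"{task_type}\n{passage}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)

# The decoded output is expected to contain a generated (instruction, answer)
# pair, which can be added to a training mixture to further tune a model
# such as FLAN-T5 on the target domain.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, this loop would be run over the whole unannotated corpus to build a synthetic instruction-tuning dataset before fine-tuning the target model.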
Submission Number: 45