KNIFE: Distilling Meta-Reasoning Knowledge with Free-Text Rationales

Published: 04 Mar 2023, Last Modified: 31 Mar 2023
ICLR 2023 Workshop on Trustworthy ML Poster
Readers: Everyone
Keywords: free-text rationales, explanation-based learning, knowledge distillation, language model, question answering, text classification, reasoning
TL;DR: We propose KNIFE, a method for distilling general reasoning knowledge from free-text rationales into language models.
Abstract: Recent works have explored using free-text rationales (FTRs)---i.e., natural language explanations of a task output---to teach language models (LMs) how to solve NLP tasks. In these works, the LM is often finetuned or prompted to jointly generate the FTR and task output. However, this approach either involves finetuning LMs on possibly conflicting objectives or prompting prohibitively large LMs. To address this, we propose KNIFE, which guides LM reasoning via FTR knowledge distillation, instead of via FTR generation. KNIFE first finetunes an FTR-augmented teacher LM to predict the task output, then finetunes a student LM so that its hidden states are aligned with the teacher's. As a result, the student LM learns general reasoning knowledge from the FTRs and can be used for inference, without FTR generation or large LMs. On two question answering datasets, we show that KNIFE outperforms various baselines in both fully-supervised and low-resource settings. Also, using two more datasets, we analyze KNIFE's failure modes and identify FTR quality as critical to KNIFE performance.
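To make the two-stage training described in the abstract concrete, below is a minimal sketch of the general idea: a teacher LM that sees the task input augmented with the free-text rationale (FTR), and a student LM that sees only the task input and is trained to align its hidden states with the teacher's. All names (teacher, student, distillation_step, the choice of decoder hidden states, and the combined task + alignment loss) are illustrative assumptions for exposition, not the authors' implementation.

```python
# Sketch of FTR knowledge distillation via hidden-state alignment.
# Assumes an encoder-decoder LM (e.g., T5); the exact layers aligned and the
# training objective here are assumptions, not the paper's exact recipe.
import torch
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
# Teacher is assumed to be already finetuned on FTR-augmented inputs (stage 1).
teacher = T5ForConditionalGeneration.from_pretrained("t5-base")
student = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

def distillation_step(question: str, rationale: str, answer: str) -> float:
    # Teacher input: task input plus the free-text rationale.
    teacher_inputs = tokenizer(
        f"question: {question} rationale: {rationale}", return_tensors="pt"
    )
    # Student input: the raw task input only (no rationale at inference time).
    student_inputs = tokenizer(f"question: {question}", return_tensors="pt")
    labels = tokenizer(answer, return_tensors="pt").input_ids

    with torch.no_grad():
        t_out = teacher(**teacher_inputs, labels=labels, output_hidden_states=True)
    s_out = student(**student_inputs, labels=labels, output_hidden_states=True)

    # Align final decoder hidden states (same target length for both models,
    # so shapes match). Which states are aligned is a modeling choice.
    align_loss = F.mse_loss(
        s_out.decoder_hidden_states[-1], t_out.decoder_hidden_states[-1]
    )
    # Combining the student's task loss with the alignment loss is one natural
    # choice of objective for this sketch.
    loss = s_out.loss + align_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the alignment loss is what transfers the teacher's FTR-informed reasoning into the student, so the student needs neither rationales nor a large prompted LM at inference time.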