Explanations from Large Language Models Make Small Reasoners Better

Published: 21 Feb 2024, Last Modified: 28 Feb 2024 · SAI-AAAI2024 Oral · CC BY 4.0
Keywords: Explanation Generation; Large Language Models; Small Reasoners
Abstract: Integrating free-text explanations into in-context learning with large language models (LLMs) has been shown to elicit strong reasoning capabilities along with reasonable explanations. However, deploying such models at scale is prohibitively expensive in real-world applications, limiting their usage. In this paper, we propose a framework that leverages explanations generated by LLMs to improve the training of small reasoners, which are more favorable for production deployment due to their low cost. We systematically explore three approaches for generating explanations with LLMs and use a multi-task learning framework that enables small models to acquire strong reasoning capabilities together with the ability to generate explanations. Experiments on multiple reasoning tasks show that our method consistently and significantly outperforms standard finetuning baselines, especially in few-shot settings, by up to 8.1% in accuracy, and even outperforms finetuning/prompting a 60x larger GPT-3 (175B) model (we denote all GPT davinci-series models as GPT-3 in this paper and assume their model sizes are 175B, following Zhang et al. (2023)) by up to 9.5% in accuracy. As a side benefit, human evaluation further shows that our method generates explanations that justify its predictions and are competitive with those of the strong GPT-3, moving towards the goal of explainable AI.
Submission Number: 8
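
The abstract only summarizes the method, but a minimal sketch of the multi-task setup it describes could look like the following. This is an illustrative assumption, not the authors' released implementation: the choice of small model (T5-base here, picked arbitrarily), the task prefixes, the example record, and the hyperparameters are all hypothetical.

```python
# Hypothetical sketch of multi-task finetuning of a small reasoner:
# each training example yields (1) an answer-prediction pair and
# (2) an explanation-generation pair, where the explanation was
# produced by a large LM (e.g., GPT-3) rather than annotated by humans.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Illustrative example record; field names are assumptions.
examples = [
    {
        "question": "Can a goldfish drive a car?",
        "answer": "no",
        "llm_explanation": "Goldfish lack the limbs and cognition needed to operate a vehicle.",
    },
]

def to_multitask_pairs(ex):
    """Turn one example into two (input, target) pairs, one per task."""
    return [
        (f"predict: {ex['question']}", ex["answer"]),           # answer-prediction task
        (f"explain: {ex['question']}", ex["llm_explanation"]),  # explanation-generation task
    ]

pairs = [pair for ex in examples for pair in to_multitask_pairs(ex)]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for src, tgt in pairs:
    inputs = tokenizer(src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids
    # Standard cross-entropy loss on both tasks; all parameters are shared.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In this reading, the explanation-generation task acts as auxiliary supervision distilled from the LLM: both tasks share the same parameters and loss, so the small model learns to answer and to justify its answers jointly.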