Targeted training for numerical reasoning with large language models

Published: 01 Jan 2025 · Last Modified: 13 May 2025 · Knowl. Inf. Syst. 2025 · CC BY-SA 4.0
Abstract: After recent gains achieved by large language models (LLMs) on numerical reasoning tasks, there is growing interest in having LLMs teach small models to improve their numerical reasoning. Instructing LLMs to generate Chains of Thought for fine-tuning small models is an established approach. However, small models are passive in this line of work and may not be able to exploit the provided training data. In this paper, we propose a novel targeted training strategy that matches the LLM's assistance to the small model's capacity. The small model proactively requests the LLM's assistance when it sifts out confusing training data. The LLM then refines such data by successively revising reasoning steps and reducing question complexity before feeding it back to the small model. Experiments show that this targeted training approach remarkably improves the performance of small models on a range of numerical reasoning datasets by 12–25%, making small models competitive even with some LLMs.
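The loop described in the abstract — sift out confusing examples, have the LLM refine them, then fine-tune on the result — might be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the `confidence` field, the threshold value, and the `refine_with_llm` stub are all placeholders.

```python
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    chain_of_thought: str
    answer: str
    confidence: float  # small model's confidence in [0, 1] (assumed signal)

def sift_confusing(data, threshold=0.5):
    """Split training data into examples the small model already handles
    and 'confusing' ones it sends back to the LLM for refinement."""
    clear = [ex for ex in data if ex.confidence >= threshold]
    confusing = [ex for ex in data if ex.confidence < threshold]
    return clear, confusing

def refine_with_llm(example):
    """Placeholder for the LLM refinement step: in the paper this revises
    reasoning steps and reduces question complexity; here we only tag it."""
    return Example(
        question=example.question,
        chain_of_thought="[revised] " + example.chain_of_thought,
        answer=example.answer,
        confidence=example.confidence,
    )

data = [
    Example("What is 2 + 3?", "2 plus 3 is 5.", "5", 0.9),
    Example("What is 17 * 24?", "17 times 24 is ...", "408", 0.2),
]
clear, confusing = sift_confusing(data)
refined = [refine_with_llm(ex) for ex in confusing]
training_set = clear + refined  # the small model is then fine-tuned on this set
```

The key design point is that the small model's own uncertainty drives the data selection, so the LLM's effort is spent only on examples the small model cannot yet exploit.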