Distilling LLMs’ Decomposition Abilities into Compact Language Models

Published: 13 Jun 2024, Last Modified: 28 Jun 2024 | ICML 2024 Workshop AI4MATH Poster | CC BY 4.0
Keywords: reasoning, reinforcement learning, dataset, benchmark
TL;DR: In this work we develop an AI-generated benchmark for distilling LLMs' decomposition abilities into smaller models and provide multiple baselines.
Abstract: Large Language Models (LLMs) have demonstrated strong reasoning abilities, yet their large size presents scalability challenges and limits further customization. In contrast, compact models offer customizable training but often fall short on complex reasoning tasks. This study focuses on distilling LLMs' decomposition skills into compact models using offline reinforcement learning. We leverage advances in LLM capabilities to provide feedback and to generate a specialized, task-specific dataset for training compact models. The development of an AI-generated dataset and the establishment of baselines constitute the primary contributions of our work, underscoring the potential of compact models to replicate complex problem-solving skills.
Submission Number: 2
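To make the abstract's setup concrete, below is a minimal sketch (my own illustration, not the authors' released code) of one simple offline-RL-style baseline: reward-weighted fine-tuning of a compact model on LLM-generated decompositions. The model name ("gpt2"), the dataset record format, and the scalar reward field are assumptions for illustration only; the paper's actual dataset and training procedure may differ.

```python
# Illustrative sketch only: reward-weighted fine-tuning of a compact causal LM
# on an offline dataset of LLM-generated problem decompositions.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # compact model: assumed choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = AdamW(model.parameters(), lr=1e-5)

# Hypothetical offline dataset: each record pairs a problem with an
# LLM-generated step-by-step decomposition and an LLM-assigned reward in [0, 1].
dataset = [
    {
        "problem": "What is 12 * 7 + 5?",
        "decomposition": "Step 1: 12 * 7 = 84. Step 2: 84 + 5 = 89.",
        "reward": 1.0,
    },
]

model.train()
for example in dataset:
    text = example["problem"] + "\n" + example["decomposition"]
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss, scaled by the offline reward so that
    # higher-rated decompositions contribute more to the gradient.
    outputs = model(**inputs, labels=inputs["input_ids"])
    loss = example["reward"] * outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This reward-weighted objective is one of the simplest offline RL baselines; more elaborate schemes (e.g., advantage-weighted updates or filtering to high-reward decompositions only) fit the same data format.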