Training Task Experts through Retrieval Based Distillation

ACL ARR 2024 June Submission 3921 Authors

16 Jun 2024 (modified: 05 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: One of the most reliable ways to create deployable models for specialized tasks is to obtain an adequate amount of high-quality task-specific data. However, for specialized tasks, such datasets often do not exist. Existing methods address this by generating such data from large language models (LLMs) and then distilling that knowledge into smaller models. However, these methods are limited by the quality of the LLMs' output and tend to generate repetitive or incorrect data. In this work, we present Retrieval Based Distillation (ReBase), a method that first retrieves data from rich online sources and then transforms it into domain-specific data. This method greatly enhances data diversity. Moreover, ReBase generates Chain-of-Thought reasoning and distills the reasoning capacity of LLMs. We test our method on four benchmarks and show that it significantly improves performance, by up to 10.76% on SQuAD, 1.37% on MNLI, and 1.94% on BBH.
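The abstract describes a retrieve-then-transform pipeline: retrieve candidate data from online sources, have an LLM rewrite each candidate into a task-specific example with Chain-of-Thought reasoning, then use the resulting set to train a smaller task expert. The sketch below is a minimal illustration of that flow under stated assumptions; all function names, the naive overlap-based retriever, and the `llm` callable are hypothetical and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of the ReBase-style retrieve-then-transform pipeline.
# All names (retrieve_candidates, transform_to_task_format, llm) are
# illustrative assumptions, not the paper's code.

from dataclasses import dataclass


@dataclass
class Example:
    question: str
    chain_of_thought: str
    answer: str


def retrieve_candidates(task_description: str, corpus: list[str], k: int = 100) -> list[str]:
    """Rank corpus documents by naive token overlap with the task description (placeholder retriever)."""
    task_tokens = set(task_description.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(task_tokens & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def transform_to_task_format(document: str, task_description: str, llm) -> Example:
    """Ask an LLM to rewrite a retrieved document into a task-specific example
    with step-by-step (Chain-of-Thought) reasoning; `llm` is an assumed callable
    returning (question, reasoning, answer)."""
    prompt = (
        f"Task: {task_description}\n"
        f"Source document: {document}\n"
        "Write a question, step-by-step reasoning, and a final answer grounded in the document."
    )
    question, reasoning, answer = llm(prompt)
    return Example(question, reasoning, answer)


def build_training_set(task_description: str, corpus: list[str], llm) -> list[Example]:
    """Retrieve diverse source documents, then transform each into a training example
    for distilling a smaller task-expert model."""
    return [
        transform_to_task_format(doc, task_description, llm)
        for doc in retrieve_candidates(task_description, corpus)
    ]
```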
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Retrieval, Distillation, Task-Expert
Contribution Types: Approaches to low-resource settings, Approaches low compute settings-efficiency, Data resources, Data analysis
Languages Studied: English
Submission Number: 3921