G-Boost: Boosting Private SLMs with General LLMs

ACL ARR 2025 May Submission3771 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Due to limited computational resources, most Large Language Model (LLM) developers can only fine-tune Small Language Models (SLMs) on their own data. However, these private SLMs typically have limited effectiveness. To enhance the performance of private SLMs, this paper proposes asking general LLMs for help. These general LLMs can be API-based models or larger LLMs whose inference cost the developers can afford. Specifically, we propose the G-Boost framework, in which a private SLM adaptively performs collaborative inference with a general LLM under the guidance of a process reward. Experiments demonstrate that our framework significantly boosts the performance of private SLMs.
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: NLP in resource-constrained settings
Contribution Types: Approaches to low-resource settings
Languages Studied: English
Submission Number: 3771