Track: Systems and infrastructure for Web, mobile, and WoT
Keywords: Service-Level Objective, LLM Inference Engine, Performance Tuning, Bayesian Optimization, Cloud Computing
TL;DR: This paper introduces SCOOT, which automatically tunes the parameters of LLM inference engines using Bayesian optimization and constraint learning to optimize service-level objectives for LLM inference services.
Abstract: As large language models (LLMs) gain increasing popularity across a wide range of web applications, it is of great importance to optimize service-level objectives (SLOs) for LLM inference services to enhance user satisfaction and improve the competitiveness of cloud vendors. In this paper, we observe that adjusting the parameters of LLM inference engines can improve service performance, and that the optimal parameter configuration differs across services. Therefore, we propose SCOOT, an automatic performance tuning system that optimizes SLOs for each LLM inference service by tuning the parameters of the inference engine. SCOOT jointly exploits single-objective and multi-objective Bayesian optimization (BO) techniques to handle various optimization objectives via exploration and exploitation. Moreover, SCOOT prunes the search space with known constraints and adopts a random forest to learn hidden constraints during the tuning process, mitigating invalid exploration. To improve tuning efficiency, SCOOT employs parallel suggestions to accelerate the tuning process. Extensive experiments demonstrate that SCOOT considerably outperforms existing tuning techniques in SLO optimization while greatly improving tuning efficiency. SCOOT is universally applicable to various LLM inference engines and is easily extensible to new parameters. SCOOT has already been deployed in the production environment of a leading international technology company.
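To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch (not SCOOT's actual implementation) of Bayesian optimization over inference-engine parameters combined with a random forest that learns hidden constraints, i.e., configurations that fail at runtime (e.g., out-of-memory). The parameter names, bounds, and toy benchmark are all placeholders for illustration only.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical 2-D search space: (max_batched_tokens, gpu_memory_utilization).
BOUNDS = np.array([[256.0, 8192.0], [0.5, 0.95]])

def benchmark(x):
    """Placeholder for launching the engine with config x and measuring an
    SLO metric (e.g., P99 latency, lower is better). Returns (objective,
    feasible); feasible=False models a hidden-constraint violation."""
    tokens, mem = x
    if mem > 0.9 and tokens > 6000:        # pretend this combination OOMs
        return None, False
    latency = 1e4 / tokens + 5.0 * mem     # toy latency model to minimize
    return latency, True

def expected_improvement(mu, sigma, best):
    """Standard EI acquisition for minimization."""
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Random warm-up trials.
X, y, feas = [], [], []
for _ in range(5):
    x = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1])
    obj, ok = benchmark(x)
    X.append(x); feas.append(ok); y.append(obj if ok else np.nan)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0)

for _ in range(25):                        # BO iterations
    X_arr, ok_mask = np.array(X), np.array(feas)
    gp.fit(X_arr[ok_mask], np.array(y)[ok_mask])  # surrogate on feasible points
    rf.fit(X_arr, ok_mask)                        # feasibility classifier

    # Score random candidates by EI weighted by predicted feasibility,
    # steering the search away from the learned hidden-constraint region.
    cand = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(512, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    p_ok = rf.predict_proba(cand)[:, list(rf.classes_).index(True)]
    score = expected_improvement(mu, sigma, np.nanmin(y)) * p_ok
    x_next = cand[np.argmax(score)]

    obj, ok = benchmark(x_next)
    X.append(x_next); feas.append(ok); y.append(obj if ok else np.nan)

best = np.nanargmin(y)
print("best config:", X[best], "objective:", y[best])
```

The sketch covers only the single-objective case with sequential suggestions; the abstract's multi-objective handling and parallel suggestion mechanism are not shown here.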
Submission Number: 863