Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
Abstract: Recent advancements in Large Language Models (LLMs) have led to increasingly diverse requests, accompanied by varying resource (compute and memory) demands to serve them. However, this in turn degrades the cost-efficiency of LLM serving, as common practice primarily relies on homogeneous GPU resources. In response to this problem, this work conducts a thorough study of serving LLMs over heterogeneous GPU resources on cloud platforms. The rationale is that different GPU types exhibit distinct compute and memory characteristics, aligning well with the divergent resource demands of diverse requests. In particular, through comprehensive benchmarking, we discover that the cost-efficiency of LLM serving can be substantially improved by carefully determining the GPU composition, deployment configurations, and workload assignments. Subsequently, we design a scheduling algorithm based on mixed-integer linear programming, which deduces the most cost-efficient serving plan under the constraints of a price budget and real-time GPU availability. Remarkably, our approach outperforms homogeneous and heterogeneous baselines under a wide array of scenarios, covering diverse workload traces, varying GPU availabilities, and multi-model serving. This casts new light on more accessible and efficient LLM serving over heterogeneous cloud resources.
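To make the scheduling idea concrete, the following is a minimal, illustrative sketch rather than the paper's actual formulation (which also decides deployment configurations and workload assignments): a toy mixed-integer program, written with the PuLP solver library, that selects how many GPUs of each type to rent so as to maximize aggregate throughput under a price budget, per-type availability caps, and an aggregate memory requirement. All GPU types, prices, throughputs, memory sizes, and demand figures below are hypothetical.

```python
# Illustrative sketch (not the paper's formulation): a toy mixed-integer program
# that picks a heterogeneous GPU composition to maximize aggregate serving
# throughput under a price budget and real-time per-type availability caps.
# All numbers (prices, throughputs, memory sizes, demands) are hypothetical.
import pulp

gpu_types = ["A10G", "L4", "A100"]
price_per_hour = {"A10G": 1.0, "L4": 0.8, "A100": 3.5}   # $/GPU-hour (hypothetical)
throughput     = {"A10G": 900, "L4": 700, "A100": 3200}  # tokens/s per GPU (hypothetical)
memory_gb      = {"A10G": 24,  "L4": 24,  "A100": 80}
available      = {"A10G": 8,   "L4": 8,   "A100": 2}     # real-time availability caps

budget_per_hour = 12.0       # price budget ($/hour)
min_total_memory_gb = 160    # memory needed for model weights + KV cache

prob = pulp.LpProblem("heterogeneous_gpu_selection", pulp.LpMaximize)

# Integer decision variables: number of GPUs of each type to rent.
n = {g: pulp.LpVariable(f"n_{g}", lowBound=0, upBound=available[g], cat="Integer")
     for g in gpu_types}

# Objective: maximize aggregate throughput of the selected GPU composition.
prob += pulp.lpSum(throughput[g] * n[g] for g in gpu_types)

# Constraint: stay within the hourly price budget.
prob += pulp.lpSum(price_per_hour[g] * n[g] for g in gpu_types) <= budget_per_hour

# Constraint: provide enough aggregate GPU memory for the deployment.
prob += pulp.lpSum(memory_gb[g] * n[g] for g in gpu_types) >= min_total_memory_gb

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for g in gpu_types:
    print(g, int(pulp.value(n[g])))
print("throughput (tokens/s):", pulp.value(prob.objective))
```

With these toy numbers, the solver mixes a few high-memory GPUs with cheaper ones to satisfy the memory floor while spending the remaining budget on throughput; the paper's full MILP additionally assigns request classes to GPU types according to their compute and memory characteristics.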
Lay Summary: This paper addresses why and how heterogeneous cloud resources can be utilized for cost-efficient LLM serving. Specifically, we benchmark the cost-efficiency of LLM serving over heterogeneous GPUs and, based on these findings, develop a novel scheduling algorithm. Experimental results demonstrate that our approach substantially outperforms existing works.
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: Distributed Machine Learning System; Generative Inference of Foundation Model.
Submission Number: 7111