FloE: On-the-Fly MoE Inference on Memory-constrained GPU

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC 4.0
TL;DR: An on-the-fly MoE inference system for memory-constrained GPUs, founded on the insight that substantial untapped redundancy exists within sparsely activated experts.
Abstract: With the widespread adoption of Mixture-of-Experts (MoE) models, there is a growing demand for efficient inference on memory-constrained devices. While offloading expert parameters to CPU memory and loading activated experts on demand has emerged as a potential solution, the large size of activated experts overburdens the limited PCIe bandwidth, hindering its effectiveness in latency-sensitive scenarios. To mitigate this, we propose FloE, an on-the-fly MoE inference system for memory-constrained GPUs. FloE is built on the insight that substantial untapped redundancy exists within sparsely activated experts. It applies various compression techniques to the experts' internal parameter matrices to reduce the data-movement load and combines them with low-cost sparse prediction, achieving perceptible wall-clock inference acceleration on resource-constrained devices. Empirically, FloE achieves a 9.3$\times$ compression of parameters per expert in Mixtral-8$\times$7B; enables deployment on a GPU with only 11GB VRAM, reducing the memory footprint by up to 8.5$\times$; and delivers a 48.7$\times$ inference speedup compared to DeepSpeed-MII on a single GeForce RTX 3090—all with only a 4.4\% $\sim$ 7.6\% average performance degradation.
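The core mechanism sketched in the abstract—keeping compressed experts in host memory and transferring only the activated ones over PCIe on demand—can be illustrated with a minimal PyTorch sketch. This is not FloE's actual implementation: the `OffloadedExpert` wrapper, the plain per-channel int8 quantization (a stand-in for FloE's compression), and the single weight matrix per expert are all simplifying assumptions for illustration.

```python
import torch

class OffloadedExpert:
    """Hypothetical wrapper: keeps one expert's weight in CPU pinned memory, int8-quantized per row."""
    def __init__(self, weight: torch.Tensor):
        # Per-row absmax scales so dequantized weights approximate the originals.
        scale = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
        q = torch.round(weight / scale).to(torch.int8)
        # Pinned memory enables faster asynchronous host-to-device copies over PCIe.
        self.q = q.contiguous().pin_memory()
        self.scale = scale.contiguous().pin_memory()

    def to_gpu(self, device) -> torch.Tensor:
        # Move the compressed tensors (~4x less PCIe traffic than fp32), then dequantize on the GPU.
        q = self.q.to(device, non_blocking=True)
        scale = self.scale.to(device, non_blocking=True)
        return q.float() * scale


def moe_layer(x, experts, router_logits, top_k=2):
    """Route each token to its top-k experts, fetching only the activated experts on demand."""
    gates = torch.softmax(router_logits, dim=-1)          # (tokens, num_experts)
    topk = torch.topk(router_logits, top_k, dim=-1)
    out = torch.zeros_like(x)
    for slot in range(top_k):
        for e in topk.indices[:, slot].unique().tolist():
            w = experts[e].to_gpu(x.device)               # load + decompress only this activated expert
            mask = topk.indices[:, slot] == e
            out[mask] += gates[mask, e:e+1] * (x[mask] @ w.t())  # simplified single-matrix "expert"
    return out
```

In FloE the transferred experts are further shrunk by the paper's compression scheme and prefetched via low-cost sparse prediction; the sketch above only shows the baseline offload-on-demand pattern that those techniques accelerate.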
Lay Summary: Language models known as "Mixture-of-Experts" (MoE) are powerful tools, but their huge size makes it difficult to run them quickly on devices with limited memory, such as consumer-grade GPUs. To manage this, some systems temporarily store model parts on slower memory (e.g., CPU main memory) and load them only when needed—but this method is slow, especially when quick responses are crucial. We developed a new approach called FloE, which cleverly compresses parts of these models to significantly speed up the process. FloE finds hidden redundancies within the model's expert components—essentially, unnecessary details that can be trimmed without significantly harming accuracy. By reducing the size of the experts’ internal data, FloE lets these large models fit comfortably into small memory spaces. Our tests show that FloE makes these models almost 49 times faster on common GPUs and reduces the memory requirement dramatically, all while maintaining excellent performance. This advancement makes powerful machine learning tools accessible for more users, even with limited hardware resources.
Primary Area: Deep Learning->Large Language Models
Keywords: Mixture-of-Experts, Efficient Inference, Model Compression, Experts Offloading
Submission Number: 6552