PuzzleMoE: Efficient Compression of Large Mixture-of-Experts Models via Sparse Expert Merging and Bit-Packed Inference
Keywords: Mixture of Experts, Large Language Models, Model Compression, Model Merging
Abstract: Mixture-of-Experts (MoE) models have shown strong potential for scaling language models efficiently by activating only a small subset of experts per input. However, their widespread deployment remains limited by the high memory overhead of storing all expert parameters, particularly as the number of experts increases. To address this challenge, prior works have explored expert dropping and merging strategies, yet they often suffer from significant performance drops at high compression ratios. In this paper, we introduce PuzzleMoE, a training-free MoE compression method that achieves both high accuracy and efficient inference through two key innovations. First, PuzzleMoE performs sparse expert merging by identifying element-wise weight redundancy and specialization, using a dual mask to capture both shared and expert-specific parameters. Second, to avoid the overhead of storing binary masks and signs separately, PuzzleMoE introduces a bit-packed encoding scheme that reuses underutilized exponent bits, enabling efficient MoE inference on GPUs. Extensive experiments demonstrate that PuzzleMoE can compress MoE models by up to 50\% while maintaining accuracy across various tasks. Specifically, it outperforms prior MoE compression methods by up to 16.7\% on MMLU at a 50\% compression ratio, and achieves up to 1.28$\times$ inference speedup.
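The two ideas in the abstract can be illustrated with a minimal sketch. Note this is not the paper's actual algorithm: the agreement test used for the dual mask, the choice of which exponent bit to reuse, and all function names below are assumptions for illustration. The sketch merges two expert weight tensors by treating near-equal elements as shared, then hides the resulting binary mask inside the fp16 bit pattern, exploiting the fact that the top exponent bit is always zero when weight magnitudes stay below 2.

```python
import numpy as np

# Top exponent bit of fp16 (bit 14 of the uint16 pattern). For any
# value with |w| < 2 this bit is zero, so it is free to reuse.
FREE_BIT = np.uint16(1 << 14)

def merge_two_experts(w1, w2, tol=0.05):
    """Dual-mask sparse merging sketch: elements where the two experts
    (nearly) agree are stored once as their average; the rest are kept
    as expert-specific values. `tol` is an assumed agreement threshold."""
    shared = np.abs(w1 - w2) <= tol
    merged = np.where(shared, (w1 + w2) / 2, w1)  # shared avg + expert-1 values
    spec2 = np.where(shared, merged, w2)          # expert-2 specific values
    return merged, spec2, shared

def pack_mask(weights, mask):
    """Bit-packed encoding sketch: store a 1-bit mask in the unused
    top exponent bit of each fp16 weight, avoiding a separate mask tensor."""
    bits = weights.astype(np.float16).view(np.uint16)
    assert not np.any(bits & FREE_BIT), "sketch assumes |w| < 2"
    return bits | (mask.astype(np.uint16) * FREE_BIT)

def unpack_mask(packed):
    """Recover both the fp16 weights and the binary mask from one tensor."""
    mask = (packed & FREE_BIT) != 0
    weights = (packed & ~FREE_BIT).view(np.float16)
    return weights, mask
```

The design point the sketch captures: because mask and weights share one storage word, inference kernels can load a single tensor and recover both with cheap bitwise operations, rather than paying the memory traffic of separate mask and sign buffers.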
Primary Area: foundation or frontier models, including LLMs
Submission Number: 4218