Abstract: Scaling large language models has driven remarkable advancements across various domains, yet the continual increase in model size poses significant challenges for real-world deployment. The Mixture of Experts (MoE) architecture offers a promising solution by dynamically selecting and activating only a subset of experts during inference, substantially reducing computational costs while preserving high performance. Despite these benefits, MoE introduces new inefficiencies, such as excessive parameter counts and communication overhead. In this work, we present a holistic study of compression techniques for MoE models to enhance both efficiency and scalability. While recent efforts have focused on Expert Trimming, which reduces the number of experts, these approaches still suffer from considerable communication and computational costs. To address this, we propose more aggressive strategies: Layer Drop, which removes entire MoE layers, and Block Drop, which eliminates whole transformer blocks. Surprisingly, these aggressive pruning techniques not only preserve model performance but also substantially improve computation and memory efficiency. Beyond Expert Trimming, we also introduce Expert Slimming, which compresses individual experts to further boost performance and integrates seamlessly with Expert Trimming. Extensive experimental results demonstrate the effectiveness of our proposed Layer Drop and Block Drop, as well as a comprehensive recipe that combines Expert Slimming and Expert Trimming, achieving a 6.05× speedup and a 77.1% reduction in memory usage while maintaining over 92% of the original performance on Mixtral-8×7B. Our code is released at https://github.com/CASE-Lab-UMD/Unified-MoE-Compression.
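To make the two aggressive Expert Trimming variants concrete, here is a minimal, hypothetical PyTorch sketch of the idea behind Layer Drop (skipping the MoE feed-forward sublayer of selected blocks) and Block Drop (removing whole transformer blocks). The module and function names (ToyBlock, layer_drop, block_drop) and the dense stand-in for the MoE sublayer are illustrative assumptions, not the authors' released implementation, which also covers how the dropped layers or blocks are selected.

```python
# Illustrative sketch only: names and the dense stand-in for the MoE sublayer
# are assumptions; see the released repository for the actual method.
import torch
import torch.nn as nn


class ToyBlock(nn.Module):
    """A minimal decoder block: self-attention followed by an (MoE-style) feed-forward sublayer."""
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.moe_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = x + attn_out
        return x + self.moe_ffn(x)


class ZeroFFN(nn.Module):
    """Stands in for a dropped MoE sublayer: contributes nothing to the residual stream."""
    def forward(self, x):
        return torch.zeros_like(x)


def layer_drop(blocks: nn.ModuleList, drop_ids: set) -> nn.ModuleList:
    """Layer Drop: skip the MoE sublayer of selected blocks, keeping their attention."""
    for i, block in enumerate(blocks):
        if i in drop_ids:
            block.moe_ffn = ZeroFFN()
    return blocks


def block_drop(blocks: nn.ModuleList, drop_ids: set) -> nn.ModuleList:
    """Block Drop: remove whole transformer blocks from the stack."""
    return nn.ModuleList([b for i, b in enumerate(blocks) if i not in drop_ids])


if __name__ == "__main__":
    blocks = nn.ModuleList([ToyBlock() for _ in range(8)])
    x = torch.randn(2, 16, 64)
    blocks = layer_drop(blocks, drop_ids={3, 5})   # drop the MoE sublayers of blocks 3 and 5
    blocks = block_drop(blocks, drop_ids={7})      # remove the last block entirely
    for block in blocks:
        x = block(x)
    print(x.shape)  # torch.Size([2, 16, 64])
```

In this sketch the hidden states still flow through the residual stream after dropping, which is why such pruning can reduce compute and memory without retraining the remaining layers.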
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/CASE-Lab-UMD/Unified-MoE-Compression
Supplementary Material: zip
Assigned Action Editor: ~Huaxiu_Yao1
Submission Number: 3792