Keywords: Mixture of Experts, Model Compression
TL;DR: We conduct a detailed investigation into MoE compression, providing a systematic understanding of its efficiency challenges. Building on these insights, we propose a comprehensive approach to further enhance MoE efficiency.
Abstract: Scaling large language models has driven remarkable advancements across various
domains, yet the continual increase in model size presents significant challenges
for real-world deployment. The Mixture of Experts (MoE) architecture offers a
promising solution by dynamically selecting and activating only a subset of experts
during inference, thus substantially reducing computational costs while preserving
high performance. Despite these benefits, MoE introduces new inefficiencies, such
as excessive parameters and communication overhead. In this work, we present
a holistic study of compression techniques for Mixture of Experts models to enhance
both efficiency and scalability. While recent efforts have focused on reducing the
number of experts, these approaches still suffer from considerable communication
and computational costs. To address this, we propose more aggressive Expert Trimming strategies,
such as Layer Drop, which removes entire MoE layers, and Block Drop, which
eliminates transformer blocks. Surprisingly, these aggressive structure pruning
techniques not only preserve model performance but also substantially improve
efficiency. Beyond Expert Trimming, we further introduce Expert Slimming,
which compresses individual experts to boost performance and can be seamlessly
combined with Expert Trimming. Extensive experimental results
demonstrate the effectiveness of our proposed methods — Layer Drop and Block
Drop — along with the comprehensive recipe that integrates Expert Slimming and
Expert Trimming, achieving a 6.05× speedup with 77.1% reduced memory usage
while maintaining over 92% of performance on Mixtral-8×7B. Our code will be
made publicly available upon acceptance.
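
To make the two pruning strategies described above concrete, the sketch below illustrates how Layer Drop (bypassing the MoE feed-forward sub-layer) and Block Drop (removing whole transformer blocks) could be applied to a toy MoE transformer. This is a minimal sketch under stated assumptions: all names (`ToyMoEBlock`, `cosine_keep_scores`, the `drop_moe` flag) and the cosine-similarity selection criterion on calibration data are illustrative, not the authors' released implementation.

```python
# Minimal, self-contained sketch of Layer Drop / Block Drop on a toy MoE transformer.
# Assumption: redundant modules are chosen by how little they change hidden states
# (high input/output cosine similarity on a small calibration batch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoEBlock(nn.Module):
    """One transformer block: self-attention sub-layer + MoE feed-forward sub-layer."""

    def __init__(self, d_model=64, n_heads=4, n_experts=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.drop_moe = False  # Layer Drop: when True, skip the MoE sub-layer entirely

    def moe(self, x):
        # Top-1 routing for simplicity: each token is sent to its highest-scoring expert.
        scores = self.router(x)              # (B, T, E)
        top1 = scores.argmax(dim=-1)         # (B, T)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (top1 == e)
            if mask.any():
                out[mask] = expert(x[mask])
        return out

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        if not self.drop_moe:                # Layer Drop bypasses this sub-layer
            x = x + self.moe(self.norm2(x))
        return x


@torch.no_grad()
def cosine_keep_scores(blocks, calib_x, sub="block"):
    """Score each block (sub='block') or its MoE sub-layer (sub='moe') by the cosine
    similarity between the module's input and output on calibration data.
    High similarity => the module barely changes the hidden states, so it is a
    good candidate for Block Drop / Layer Drop."""
    sims = []
    x = calib_x
    for blk in blocks:
        if sub == "moe":
            # Hidden state right before the MoE sub-layer (after the attention residual).
            h = blk.norm1(x)
            pre = x + blk.attn(h, h, h, need_weights=False)[0]
            post = pre + blk.moe(blk.norm2(pre))
        else:
            pre, post = x, blk(x)
        sims.append(F.cosine_similarity(pre.flatten(1), post.flatten(1), dim=-1).mean().item())
        x = blk(x)  # propagate through the original, unmodified block for the next score
    return sims


# Usage sketch: build a toy stack of blocks, then apply Block Drop and Layer Drop.
blocks = nn.ModuleList(ToyMoEBlock() for _ in range(8))
calib = torch.randn(4, 16, 64)               # hypothetical calibration batch

# Block Drop: remove the blocks whose outputs are most similar to their inputs.
block_sims = cosine_keep_scores(blocks, calib, sub="block")
drop_ids = sorted(range(len(blocks)), key=lambda i: -block_sims[i])[:2]
blocks = nn.ModuleList(b for i, b in enumerate(blocks) if i not in drop_ids)

# Layer Drop: keep the blocks but bypass the MoE sub-layer in the most redundant ones.
moe_sims = cosine_keep_scores(blocks, calib, sub="moe")
for i in sorted(range(len(blocks)), key=lambda i: -moe_sims[i])[:2]:
    blocks[i].drop_moe = True
```

The same idea would apply to a real checkpoint such as Mixtral-8×7B: score each MoE layer or transformer block on a small calibration set and remove or bypass the most redundant ones before combining with Expert Slimming.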
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8285