Delta Decompression for MoE-based LLMs Compression

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. CC BY 4.0.
TL;DR: In this paper, we present a novel approach to compressing Mixture of Experts LLMs based on delta decompression.
Abstract: Mixture-of-Experts (MoE) architectures in large language models (LLMs) achieve exceptional performance, but face prohibitive storage and memory requirements. To address these challenges, we present $D^2$-MoE, a new delta decompression compressor for reducing the parameters of MoE LLMs. Based on observations of expert diversity, we decompose expert weights into a shared base weight and unique delta weights. Specifically, our method first merges the experts' weights into a base weight using the Fisher information matrix to capture shared components. Then, we compress the delta weights through Singular Value Decomposition (SVD) by exploiting their low-rank properties. Finally, we introduce a semi-dynamical structured pruning strategy for the base weight, combining static and dynamic redundancy analysis to achieve further parameter reduction while maintaining input adaptivity. In this way, our $D^2$-MoE compacts MoE LLMs to high compression ratios without additional training. Extensive experiments highlight the superiority of our approach, with over 13\% performance gains over other compressors on Mixtral, Phi-3.5, DeepSeek, and Qwen2 MoE LLMs at 40$\sim$60\% compression rates. Code is available at https://github.com/lliai/D2MoE.
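The abstract describes a three-stage pipeline; below is a minimal toy sketch of the first two stages only (Fisher-weighted merging into a base weight, then SVD compression of the per-expert deltas), assuming each expert is a single linear weight matrix and that per-expert Fisher scores are already computed. All names (`fisher_weighted_base`, `svd_compress_delta`, `rank`, the random stand-in weights) are illustrative and not taken from the released code; the semi-dynamical structured pruning of the base weight is omitted because it depends on activation statistics not shown here.

```python
# Toy sketch of the delta-decompression idea, under the assumptions stated above.
import torch


def fisher_weighted_base(expert_weights, fisher_scores):
    """Merge expert weights into one shared base, weighting by Fisher scores."""
    scores = torch.tensor(fisher_scores, dtype=expert_weights[0].dtype)
    scores = scores / scores.sum()
    stacked = torch.stack(expert_weights)            # (num_experts, out, in)
    return (scores.view(-1, 1, 1) * stacked).sum(0)  # shared base weight


def svd_compress_delta(delta, rank):
    """Keep only the top-`rank` singular directions of a delta weight."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out, rank)
    B = Vh[:rank, :]             # (rank, in)
    return A, B                  # low-rank factors; delta ~= A @ B


# Usage example with random stand-in expert weights and placeholder Fisher scores.
torch.manual_seed(0)
experts = [torch.randn(512, 512) for _ in range(8)]
fisher = [1.0, 0.8, 1.2, 0.9, 1.1, 0.7, 1.3, 1.0]

base = fisher_weighted_base(experts, fisher)
factors = [svd_compress_delta(W - base, rank=32) for W in experts]

# Reconstruct expert 0 from the shared base plus its low-rank delta.
A, B = factors[0]
approx = base + A @ B
print("relative error:", (approx - experts[0]).norm() / experts[0].norm())
```

Storing one base matrix plus low-rank factors per expert is what yields the parameter reduction: for `num_experts` experts of size `out x in`, the footprint drops from `num_experts * out * in` to roughly `out * in + num_experts * rank * (out + in)`.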
Lay Summary: We present $D^2$-MoE, a new compression framework for MoE LLMs. $D^2$-MoE decomposes expert weights into a shared base weight and unique delta weights. The delta weights are then compressed using SVD, and the base weight is further compressed using a semi-dynamical structured pruning strategy. Experimental results show that $D^2$-MoE outperforms existing methods, maintaining high accuracy and low perplexity even at high compression rates.
Primary Area: Applications->Language, Speech and Dialog
Keywords: Mixture of Experts, Efficient Large Language Models, Delta Decompression, Structured Compression
Submission Number: 98