Keywords: Large Language Models (LLMs), Mixture-of-Experts (MoE), Dynamic Expert Clustering, Load Balancing, Memory Optimization, Low-Rank Adaptation
TL;DR: We break the MoE trilemma with a unified framework that dynamically clusters experts and compresses their parameters, achieving better efficiency without sacrificing performance.
Abstract: Mixture-of-Experts (MoE) Large Language Models (LLMs) face a trilemma of load imbalance, parameter redundancy, and communication overhead. We introduce a unified framework based on dynamic expert clustering and structured compression that addresses these issues cohesively. Our method employs an online clustering procedure that periodically regroups experts using a fused metric of parameter and activation similarity, which stabilizes expert utilization. To our knowledge, this is one of the first frameworks to leverage the semantic embedding capability of the router to dynamically reconfigure the model’s architecture during training for substantial efficiency gains. Within each cluster, we decompose expert weights into a shared base matrix and extremely low-rank residual adapters, achieving up to a fivefold parameter reduction per group while preserving specialization. This structure enables a two-stage hierarchical routing strategy: tokens are first assigned to a cluster, then to specific experts within it, drastically reducing the routing search space and the volume of all-to-all communication. Furthermore, a heterogeneous precision scheme, which stores shared bases in FP16 and residual factors in INT4, coupled with dynamic offloading of inactive clusters, reduces peak memory consumption to levels comparable to dense models. Evaluated on GLUE and WikiText-103, our framework matches the quality of standard MoE models while reducing total parameters by approximately 80%, improving throughput by 10% to 20%, and lowering expert load variance by a factor of more than three. Our work demonstrates that structural reorganization is a principled path toward scalable, compute-efficient, and memory-efficient MoE LLMs.
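As a concrete illustration of the expert decomposition and two-stage routing described in the abstract, the PyTorch sketch below combines a cluster-shared base weight with per-expert low-rank residual adapters behind a hierarchical router. All names (ClusteredMoELayer, num_clusters, experts_per_cluster, rank) are illustrative assumptions for exposition, not the authors' implementation; the online re-clustering, INT4 quantization of residual factors, and cluster offloading steps are omitted.

```python
# Minimal sketch (assumed structure, not the paper's code) of a cluster-shared base
# plus low-rank residual experts with two-stage hierarchical routing.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusteredMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_clusters=4,
                 experts_per_cluster=4, rank=8):
        super().__init__()
        self.num_clusters = num_clusters
        self.experts_per_cluster = experts_per_cluster
        # One shared base weight per cluster (stored in FP16 in the paper's scheme).
        self.base = nn.Parameter(torch.randn(num_clusters, d_ff, d_model) * 0.02)
        # Extremely low-rank residual adapters per expert (candidates for INT4 storage).
        self.res_down = nn.Parameter(
            torch.randn(num_clusters, experts_per_cluster, rank, d_model) * 0.02)
        self.res_up = nn.Parameter(
            torch.randn(num_clusters, experts_per_cluster, d_ff, rank) * 0.02)
        # Two-stage hierarchical router: cluster gate, then per-cluster expert gate.
        self.cluster_gate = nn.Linear(d_model, num_clusters)
        self.expert_gate = nn.Linear(d_model, num_clusters * experts_per_cluster)

    def forward(self, x):  # x: (tokens, d_model)
        t = x.size(0)
        # Stage 1: assign each token to a cluster.
        c_idx = self.cluster_gate(x).argmax(dim=-1)                      # (T,)
        # Stage 2: route only among the experts of the chosen cluster.
        e_scores = self.expert_gate(x).view(t, self.num_clusters,
                                            self.experts_per_cluster)
        e_scores = e_scores[torch.arange(t), c_idx]                      # (T, E)
        e_idx = e_scores.argmax(dim=-1)                                  # (T,)
        gate = F.softmax(e_scores, dim=-1).gather(1, e_idx.unsqueeze(1)) # (T, 1)

        # Expert computation: shared base plus the selected expert's low-rank residual.
        # (For brevity this gathers a per-token weight copy; a real implementation
        # would group tokens by cluster before applying the shared base.)
        base_w = self.base[c_idx]                                        # (T, d_ff, d_model)
        down = self.res_down[c_idx, e_idx]                               # (T, rank, d_model)
        up = self.res_up[c_idx, e_idx]                                   # (T, d_ff, rank)
        h = torch.einsum('tfd,td->tf', base_w, x)                        # base path
        h = h + torch.einsum('tfr,trd,td->tf', up, down, x)              # residual path
        return gate * F.gelu(h)


if __name__ == "__main__":
    layer = ClusteredMoELayer()
    tokens = torch.randn(16, 512)
    print(layer(tokens).shape)  # torch.Size([16, 1024])
```

Under the heterogeneous precision scheme sketched in the abstract, `base` would be kept in FP16 while `res_down` and `res_up` would be quantized to INT4, and the fused parameter/activation similarity metric would periodically regroup experts into new clusters during training.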
Code for experiments is available at https://anonymous.4open.science/r/SUBMIT-0001/README.md
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19381