HookMoE: A learnable performance compensation strategy for Mixture-of-Experts LLM inference acceleration

ACL ARR 2025 May Submission1711 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Mixture-of-Experts (MoE) architectures have emerged as a promising paradigm for scaling model capacity through top-$k$ routing mechanisms. Although reducing the number of activated experts inherently accelerates inference, this efficiency gain typically comes at the cost of significant performance degradation. To address this trade-off between efficiency and performance, we propose HookMoE, a plug-and-play single-layer compensation framework that effectively restores performance using only a small post-training calibration set. Our method strategically inserts a lightweight trainable Hook module immediately preceding selected transformer blocks. In comprehensive evaluations on four popular MoE models, our method reduces the number of activated experts by more than 50\% and achieves a 1.42$\times$ inference speed-up during the prefill stage, with an average performance degradation of only 2.5\% across various benchmarks. Through systematic analysis, we further reveal that the upper layers require fewer active experts, offering actionable insights for refining dynamic expert selection strategies and enhancing the overall efficiency of MoE models.
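Below is a minimal sketch of the idea described in the abstract, not the authors' implementation. It assumes (hypothetically) that the Hook module is a small low-rank residual adapter applied to the hidden states right before a selected transformer block whose MoE layer now routes to fewer experts, that only the Hook is trained on the calibration set, and that the model stores its blocks in a `model.layers` ModuleList as many Hugging Face decoders do.

```python
import torch
import torch.nn as nn


class Hook(nn.Module):
    """Lightweight trainable compensation module.
    Hypothetical design: a low-rank residual adapter on the hidden states."""

    def __init__(self, hidden_size: int, rank: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, rank, bias=False)
        self.up = nn.Linear(rank, hidden_size, bias=False)
        nn.init.zeros_(self.up.weight)  # start as an identity (no-op) mapping

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.down(hidden_states))


class HookedBlock(nn.Module):
    """Wraps a frozen transformer block so the Hook runs immediately before it."""

    def __init__(self, block: nn.Module, hidden_size: int):
        super().__init__()
        self.hook = Hook(hidden_size)
        self.block = block
        for p in self.block.parameters():  # only the Hook parameters are trained
            p.requires_grad_(False)

    def forward(self, hidden_states: torch.Tensor, *args, **kwargs):
        return self.block(self.hook(hidden_states), *args, **kwargs)


def insert_hooks(model: nn.Module, layer_ids, hidden_size: int) -> nn.Module:
    """Insert Hook modules before the selected transformer blocks
    (assumes the blocks live in `model.layers`)."""
    for i in layer_ids:
        model.layers[i] = HookedBlock(model.layers[i], hidden_size)
    return model
```

In this sketch, the Hook would be fitted on a small calibration set (for example, by matching the hidden states produced with the original top-$k$), while the base model stays frozen; the zero-initialized up-projection makes the Hook a no-op before calibration.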
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: parameter-efficient-training, LLM Efficiency
Contribution Types: Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 1711