$\mu$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts

Published: 11 Jun 2025, Last Modified: 10 Jul 2025, ES-FoMo III, CC BY 4.0
Keywords: LLM, Pruning, MoE, VLM
TL;DR: Low-complexity activation-aware pruning enables LLM online dynamic compression to realize a mixture of micro-experts
Abstract: To tackle the huge computational demand of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these rely on calibration data, domain shift can arise on unseen downstream tasks. With efficient calibration, activation-aware pruning can instead be executed adaptively for every prompt while still reducing inference complexity. We formulate this as a mixture of micro-experts, called $\mu$-MoE. Experiments demonstrate that $\mu$-MoE can dynamically adapt to prompt-dependent structured sparsity.
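To make the per-prompt idea concrete, below is a minimal sketch of prompt-adaptive activation-aware pruning for a single linear layer. It uses a Wanda-style importance score ($|W_{ij}| \cdot \|x_j\|_2$) recomputed from the current prompt's activations, and drops the lowest-scoring fraction of weights per output row. The function name, the scoring rule, and the per-row granularity are illustrative assumptions, not the paper's exact $\mu$-MoE formulation.

```python
import torch

def prompt_adaptive_prune(linear: torch.nn.Linear, x: torch.Tensor, sparsity: float = 0.5):
    """Prune one linear layer using only the current prompt as calibration data.

    Args:
        linear: the layer to prune in place.
        x: prompt activations feeding this layer, shape (num_tokens, in_features).
        sparsity: fraction of weights to zero in each output row.
    Returns:
        The binary mask that was applied (the "micro-expert" selected for this prompt).
    """
    W = linear.weight.data                       # (out_features, in_features)
    col_norm = x.norm(p=2, dim=0)                # per-input-channel activation norm
    score = W.abs() * col_norm                   # activation-aware importance score
    k = int(W.shape[1] * sparsity)               # number of weights to drop per row
    drop_idx = score.topk(k, dim=1, largest=False).indices
    mask = torch.ones_like(W)
    mask.scatter_(1, drop_idx, 0.0)              # zero out the least important weights
    linear.weight.data = W * mask
    return mask
```

In this reading, each prompt induces its own sparsity pattern, so the dense layer behaves like a pool of micro-experts from which a prompt-dependent subset is activated; the paper's actual routing and structured-sparsity granularity may differ.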
Submission Number: 48