MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance
Abstract: Mixture-of-Experts (MoE) large language models (LLMs), which leverage dynamic routing and sparse activation to enhance efficiency and scalability, achieve strong performance while reducing computational cost. However, these models incur significant memory overhead, limiting their practical deployment and broader adoption. Post-training quantization (PTQ), a widely used method for compressing LLMs, suffers severe accuracy degradation and diminished generalization when applied to MoE models. This paper investigates the impact of MoE's sparse and dynamic characteristics on quantization and identifies two primary challenges: (1) inter-expert imbalance, the uneven distribution of calibration samples across experts, which leads to insufficient and biased calibration for less frequently used experts; and (2) intra-expert imbalance, arising from MoE's unique aggregation mechanism, whereby different samples are correlated with their assigned experts to varying degrees. To address these challenges, we propose MoEQuant, a quantization framework tailored for MoE LLMs. MoEQuant comprises two novel techniques: 1) Expert-Balanced Self-Sampling (EBSS), an efficient sampling method that constructs a calibration set with a balanced expert distribution by using the cumulative probabilities of tokens and expert-balance metrics as guiding factors; and 2) Affinity-Guided Quantization (AGQ), which incorporates the affinities between experts and samples into the quantization process, thereby accurately assessing the impact of individual samples on different experts within the MoE layer. Experiments demonstrate that MoEQuant achieves substantial performance gains (an accuracy gain of more than 10 points on HumanEval for DeepSeekMoE-16B under 4-bit quantization) and boosts efficiency.
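To make the two techniques concrete, the sketch below illustrates the underlying ideas under stated assumptions; it is a minimal illustration, not the authors' implementation. `affinity_weighted_error` weights each calibration token's reconstruction error for an expert by its routing (gating) weight, in the spirit of AGQ, and `expert_balance_score` is one plausible expert-balance metric (normalized entropy of expert usage) that an EBSS-style sampler could use as a guiding factor; both function names and signatures are hypothetical.

```python
# Illustrative PyTorch sketch of AGQ-style affinity weighting and an EBSS-style
# balance metric. Names and signatures are hypothetical, not from the paper's code.
import torch

def affinity_weighted_error(expert_fp, expert_q, tokens, affinity):
    """Affinity-guided calibration error for one expert (AGQ-style weighting).

    expert_fp / expert_q : full-precision and quantized expert modules (callables)
    tokens               : (N, d) hidden states routed to this expert
    affinity             : (N,) routing (gating) weights of those tokens for this expert
    Each token's reconstruction error is scaled by its affinity, so weakly routed
    tokens contribute less to the expert's calibration objective.
    """
    with torch.no_grad():
        err = (expert_fp(tokens) - expert_q(tokens)).pow(2).sum(dim=-1)  # (N,)
    return (affinity * err).sum() / affinity.sum().clamp_min(1e-8)

def expert_balance_score(expert_counts):
    """Normalized-entropy balance metric over expert usage counts (1.0 = perfectly even).

    One plausible form of the 'expert balance metric' that an EBSS-style sampler could
    use, alongside cumulative token probabilities, when selecting self-generated
    calibration sequences.
    """
    p = expert_counts.float() / expert_counts.sum().clamp_min(1)
    p = p.clamp_min(1e-8)
    return float(-(p * p.log()).sum() / torch.log(torch.tensor(float(p.numel()))))
```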
Lay Summary: Large language models (LLMs) have made remarkable progress, and Mixture-of-Experts (MoE) LLMs, with their dynamic routing and sparse activation, offer high performance at reduced cost. However, when it comes to practical deployment, these models face a significant hurdle: high memory demands. Post-training quantization (PTQ), a common method for shrinking LLMs, doesn't work well with MoE LLMs, causing a sharp decline in accuracy.
The root of the problem lies in two imbalances within MoE LLMs. First, samples are unevenly distributed among experts. Some experts get a lot of samples during calibration, while others get too few, leading to inaccurate calibration. Second, the connection strength between samples and their assigned experts varies, but traditional PTQ methods overlook this.
To solve these issues, the researchers developed MoEQuant, a kind of smart toolkit for MoE LLMs. One part of it, Expert-Balanced Self-Sampling (EBSS), works like a careful gardener: it uses the model's own ability to generate data, together with a few guiding factors, to build a calibration set in which all experts are used evenly. This set also closely matches the model's original data distribution. The other part, Affinity-Guided Quantization (AGQ), acts like a precision-tuning device: it takes the connection strength between samples and experts into account during quantization, which makes the calculation of quantization errors more accurate and helps the model perform better.
Tests on different MoE LLMs, such as DeepSeekMoE-16B, Qwen-MoE-14B, and Mixtral-8x7B, show that MoEQuant is very effective. It significantly improves model performance even under low-bit quantization, and it also boosts generalization ability, especially for instruction-tuned models. What's more, MoEQuant speeds up inference and saves a lot of memory, making it possible to run MoE LLMs on regular consumer-level devices. Overall, MoEQuant is a big step forward in making MoE LLMs more practical and accessible.
Link To Code: https://github.com/chenzx921020/MoEQuant
Primary Area: Deep Learning->Large Language Models
Keywords: Mixture of Experts, Large Language Models, Quantization
Submission Number: 658