Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts

Published: 19 Jun 2025, Last Modified: 19 Jun 2025
Accepted by TMLR
License: CC BY 4.0
Abstract: The traditional viewpoint on Sparse Mixture of Experts (MoE) models is that instead of training a single _large_ expert, which is computationally expensive, we can train many _small_ experts. The hope is that if the total parameter count of the small experts equals that of the single large expert, then we retain the representation power of the large expert while gaining computational tractability and promoting expert specialization. The recently introduced Soft MoE replaces the Sparse MoE's discrete routing mechanism with a differentiable gating function that smoothly mixes tokens. While this smooth gating function successfully mitigates the various training instabilities associated with Sparse MoE, it is unclear whether it induces implicit biases that affect Soft MoE's representation power or potential for expert specialization. We prove that Soft MoE with a single arbitrarily powerful expert cannot represent simple convex functions. This shows that Soft MoE's success cannot be explained by the traditional viewpoint of many small experts collectively mimicking the representation power of a single large expert, and that multiple experts are actually _necessary_ to achieve good representation power (even for a fixed total parameter count). Continuing this line of investigation, we introduce a notion of expert specialization for Soft MoE and, varying the number of experts while fixing the total parameter count, consider the following (computationally intractable) task: given any input, how can we discover the expert subset that is specialized to predict this input's label? We empirically show that when there are many small experts, the architecture is implicitly biased in a fashion that allows us to efficiently approximate the specialized expert subset. Our method is easy to implement and can potentially reduce computation during inference. For example, using our method on ImageNet, one can perform inference using only $1/8$ of the experts and still retain $99$% of the test accuracy of using all experts.
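To make the abstract's two mechanisms concrete, below is a minimal PyTorch sketch of a Soft MoE layer with differentiable gating (softmax dispatch/combine weights over token-slot logits) plus an optional inference-time expert-subset mask. This is not the authors' implementation (see the linked repository for that); the class name `SoftMoE`, the `slot_embeds` parameterization, the expert MLP widths, and the `active_experts` masking argument are illustrative assumptions, and the subset-selection procedure itself is only hinted at here by zeroing out non-selected experts.

```python
# Minimal sketch of a Soft MoE layer with soft (differentiable) routing.
# Assumptions: one learnable embedding per expert slot; experts are small MLPs;
# `active_experts` is a hypothetical hook for inference-time expert-subset selection.
import torch
import torch.nn as nn


class SoftMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int, slots_per_expert: int = 1, hidden_mult: int = 4):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        self.num_slots = num_experts * slots_per_expert
        # Slot embeddings: token-slot logits are inner products with these vectors.
        self.slot_embeds = nn.Parameter(torch.randn(self.num_slots, dim) / dim**0.5)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_mult * dim), nn.GELU(), nn.Linear(hidden_mult * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor, active_experts=None) -> torch.Tensor:
        # x: (batch, tokens, dim)
        logits = torch.einsum("btd,sd->bts", x, self.slot_embeds)   # (b, tokens, slots)
        dispatch = logits.softmax(dim=1)  # each slot input is a convex mix of tokens
        combine = logits.softmax(dim=2)   # each token output is a convex mix of slot outputs
        slot_inputs = torch.einsum("bts,btd->bsd", dispatch, x)     # (b, slots, dim)

        # Run each expert on its own slots.
        slot_inputs = slot_inputs.view(x.size(0), self.num_experts, self.slots_per_expert, -1)
        slot_outputs = torch.stack(
            [self.experts[i](slot_inputs[:, i]) for i in range(self.num_experts)], dim=1
        )  # (b, experts, slots_per_expert, dim)

        if active_experts is not None:
            # Illustrative inference-time restriction to a subset of experts:
            # outputs of non-selected experts are zeroed before recombination.
            mask = torch.zeros(self.num_experts, device=x.device)
            mask[list(active_experts)] = 1.0
            slot_outputs = slot_outputs * mask.view(1, -1, 1, 1)

        slot_outputs = slot_outputs.reshape(x.size(0), self.num_slots, -1)
        return torch.einsum("bts,bsd->btd", combine, slot_outputs)  # (b, tokens, dim)
```

As a usage sketch, `SoftMoE(dim=256, num_experts=128)(x)` runs the full layer, while passing `active_experts=range(16)` restricts inference to 16 of the 128 experts, mirroring the abstract's "use only $1/8$ of the experts" setting (the actual subset-discovery method is described in the paper, not here).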
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/YoungseogChung/beyond-param-count
Supplementary Material: zip
Assigned Action Editor: ~Pablo_Samuel_Castro1
Submission Number: 4097