TL;DR: We derive scaling laws for memory- and compute-constrained Mixture of Experts language models.
Abstract: Mixture of Experts (MoE) architectures have significantly increased computational efficiency in both research and real-world applications of large-scale machine learning models. However, their scalability and efficiency under memory constraints remain relatively underexplored. In this work, we present joint scaling laws for dense and MoE models, incorporating key factors such as the number of active parameters, dataset size, and the number of experts. Our findings provide a principled framework for selecting the optimal MoE configuration under fixed memory and compute budgets. Surprisingly, we show that MoE models can be more memory-efficient than dense models, contradicting conventional wisdom. Extensive empirical validation confirms the theoretical predictions of our scaling laws. These results offer actionable insights for designing and deploying MoE models in practical large-scale training scenarios.
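For intuition only, here is a minimal sketch of how such a joint scaling law could be used to pick an MoE configuration under fixed memory and compute budgets. The functional form, the coefficients, and the memory/FLOPs approximations below are illustrative assumptions, not the paper's fitted law.

```python
import itertools

# Illustrative coefficients only -- NOT the paper's fitted values.
A, B, C = 400.0, 1500.0, 1.7
ALPHA, BETA, DELTA = 0.34, 0.28, 0.40

def predicted_loss(n_active, tokens, n_experts):
    """Toy joint scaling law: loss improves with active params, data, and experts."""
    effective_params = n_active * (n_experts ** DELTA)  # assumed benefit of more experts
    return C + A / (effective_params ** ALPHA) + B / (tokens ** BETA)

def memory_params(n_active, n_experts):
    """Rough total-parameter proxy for memory (ignores shared/attention layers)."""
    return n_active * n_experts

def best_config(mem_budget_params, flop_budget):
    """Grid-search the lowest predicted loss within memory and compute budgets."""
    active_grid = [1e8, 3e8, 1e9, 3e9, 1e10]   # active parameters per token
    expert_grid = [1, 2, 4, 8, 16, 32]          # 1 expert ~ dense model
    best = None
    for n_active, n_experts in itertools.product(active_grid, expert_grid):
        if memory_params(n_active, n_experts) > mem_budget_params:
            continue
        tokens = flop_budget / (6 * n_active)   # spend the full budget, ~6*N*D FLOPs
        loss = predicted_loss(n_active, tokens, n_experts)
        if best is None or loss < best[0]:
            best = (loss, n_active, n_experts, tokens)
    return best

print(best_config(mem_budget_params=2e10, flop_budget=1e21))
```

Under these toy assumptions, the search trades active parameters against expert count: more experts raise the memory footprint but let a smaller active-parameter model reach a given loss, which is the kind of trade-off the paper's scaling laws quantify.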
Lay Summary: Mixture of Experts (MoE) models are LLMs built from multiple smaller expert models, only some of which are activated for each input. This makes the models computationally efficient. However, their memory usage is less well understood. This paper introduces a scaling-law equation showing how MoE models can, surprisingly, be more memory-efficient than traditional dense models. Extensive experiments confirm these findings, offering clear guidelines for choosing the best MoE configuration within practical memory and compute limits.
Link To Code: https://huggingface.co/maciek-pioro/joint-moe-scaling-laws
Primary Area: Deep Learning->Large Language Models
Keywords: mixture of experts, scaling laws, llm
Submission Number: 4618