Scaling Laws for Fine-Grained Mixture of Experts

ICLR 2024 Workshop ME-FoMo, Submission 41

Published: 04 Mar 2024, Last Modified: 02 May 2024 · ME-FoMo 2024 Oral · CC BY 4.0
Keywords: LLM, MoE, Mixture of Experts, conditional computation, scaling laws, granularity
TL;DR: Presenting scaling laws for language models with fine-grained Mixture of Experts.
Abstract: Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.
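To make the idea of a compute-optimal configuration concrete, below is a minimal sketch of how a scaling law in model size N, training tokens D, and granularity G could be used to allocate a fixed FLOPs budget. The power-law functional form, all constants (`a`, `b`, `c`, `g`, `alpha`, `beta`, `gamma`), and the `6*N*D` compute approximation are illustrative assumptions for this example, not the fitted values or exact parameterization reported in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative power-law form for a fine-grained MoE scaling law:
#   L(N, D, G) = c + (g / G**gamma + a) / N**alpha + b / D**beta
# All constants below are placeholders, NOT the paper's fitted values.
a, b, c, g = 18.0, 30.0, 0.5, 2.0
alpha, beta, gamma = 0.34, 0.28, 0.6

def predicted_loss(N, D, G):
    """Hypothetical loss for N parameters, D training tokens, granularity G."""
    return c + (g / G**gamma + a) / N**alpha + b / D**beta

def optimal_allocation(flops_budget, G, flops_per_param_token=6.0):
    """Split a FLOPs budget between model size N and tokens D, assuming C ~ 6*N*D."""
    def objective(log_N):
        N = np.exp(log_N[0])
        D = flops_budget / (flops_per_param_token * N)
        return predicted_loss(N, D, G)

    res = minimize(objective, x0=[np.log(1e9)], method="Nelder-Mead")
    N_opt = float(np.exp(res.x[0]))
    D_opt = flops_budget / (flops_per_param_token * N_opt)
    return N_opt, D_opt, predicted_loss(N_opt, D_opt, G)

if __name__ == "__main__":
    # Compare compute-optimal allocations at a fixed budget for several granularities.
    for G in (1, 4, 16):
        N, D, L = optimal_allocation(1e21, G)
        print(f"G={G:>2}: N*={N:.3e}, D*={D:.3e}, predicted loss={L:.3f}")
```

Under any law of this general shape, higher granularity lowers the effective coefficient on the model-size term, which shifts the compute-optimal trade-off between parameters and tokens; the paper derives the actual form and constants empirically.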
Submission Number: 41