Abstract: The mixture-of-experts (MoE) architecture is gaining increasing attention due to its unique properties and remarkable performance, especially on language tasks. By sparsely activating a subset of parameters for each token, the MoE architecture can increase model size without sacrificing computational efficiency, achieving a better trade-off between performance and training cost. However, the underlying mechanism of MoE remains under-explored, and its degree of modularization is questionable. In this paper, we make an initial attempt to understand the inner workings of MoE-based large language models (LLMs). Concretely, we comprehensively study the parametric and behavioural features of three recent MoE-based models and reveal several intriguing observations: (1) neurons act like fine-grained experts; (2) the router of MoE usually selects experts with larger output norms; and (3) expert diversity increases with layer depth, with the last layer being an outlier. Based on these observations, we also provide suggestions for a broad spectrum of MoE practitioners, such as on router design and expert allocation. We hope this work can shed light on future research on the MoE framework and other modular architectures.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: feature attribution, free-text/natural language explanations
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 5609