Linear-MoE: Linear Sequence Modeling Meets Mixture-of-Experts

ACL ARR 2025 February Submission 3264 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Linear Sequence Modeling (LSM) methods, such as linear attention, state space models, and linear RNNs, together with Mixture-of-Experts (MoE), have recently emerged as significant architectural improvements. In this paper, we introduce Linear-MoE, a production-level system for modeling and training large-scale models that integrate LSM with MoE. Linear-MoE leverages the advantages of both LSM modules, which provide linear-complexity sequence modeling, and MoE layers, which provide sparse activation, aiming to deliver high performance with efficient training. The Linear-MoE system comprises: 1) a Modeling subsystem, which provides a unified framework supporting all instances of LSM, and 2) a Training subsystem, which facilitates efficient training by incorporating advanced parallelism techniques, particularly Sequence Parallelism designed for Linear-MoE models. Additionally, we explore hybrid models that interleave Linear-MoE layers with standard Transformer-MoE layers, together with their Sequence Parallelism support, to further enhance model flexibility and performance. Evaluations on two model series, A0.3B-2B and A1B-7B, demonstrate that Linear-MoE achieves efficiency gains while maintaining competitive performance on various benchmarks, showcasing its potential as a next-generation foundation model architecture. Code: \url{https://anonymous.4open.science/r/Linear-MoE-AD77}
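To make the layer composition described in the abstract concrete, the sketch below shows how a Linear-MoE block could pair a linear-complexity token mixer with a sparsely activated MoE feed-forward layer. This is a minimal illustration, not the authors' implementation: the module names (`LinearAttention`, `MoE`, `LinearMoEBlock`), the ELU-based feature map, top-1 routing, and all hyperparameters are assumptions for exposition only.

```python
# Hypothetical sketch of a Linear-MoE layer: an LSM token mixer (here, simple
# non-causal linear attention) followed by a top-1 routed MoE feed-forward.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Single-head linear attention with O(T) cost via the kernel trick:
    softmax(QK^T)V is approximated by phi(Q) (phi(K)^T V). Non-causal for brevity."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        phi = lambda t: F.elu(t) + 1.0                    # positive feature map
        q, k, v = phi(self.q_proj(x)), phi(self.k_proj(x)), self.v_proj(x)
        kv = torch.einsum("btd,bte->bde", k, v)           # (B, D, D) summary state
        z = 1.0 / (torch.einsum("btd,bd->bt", q, k.sum(dim=1)) + 1e-6)
        return torch.einsum("btd,bde,bt->bte", q, kv, z)


class MoE(nn.Module):
    """Top-1 routed mixture of small FFN experts (sparse activation)."""

    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)         # (B, T, E)
        top_gate, top_idx = gates.max(dim=-1)             # top-1 routing per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                           # tokens routed to expert e
            if mask.any():
                out[mask] = top_gate[mask].unsqueeze(-1) * expert(x[mask])
        return out


class LinearMoEBlock(nn.Module):
    """One Linear-MoE layer: LSM token mixing followed by a sparse MoE FFN."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mixer, self.moe = LinearAttention(dim), MoE(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))                 # linear-complexity mixing
        return x + self.moe(self.norm2(x))                # sparsely activated FFN


if __name__ == "__main__":
    block = LinearMoEBlock(dim=64)
    print(block(torch.randn(2, 128, 64)).shape)           # torch.Size([2, 128, 64])
```

A hybrid model, as explored in the paper, would interleave such blocks with standard softmax-attention Transformer-MoE blocks in a single stack; the distributed Sequence Parallelism machinery of the Training subsystem is omitted here.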
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Linear sequence modeling, mixture of experts, sequence parallelism, hybrid models
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 3264