Harder Tasks Need More Experts: Dynamic Routing in MoE Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike traditional MoE approaches that rely on fixed Top-K routing, which activates a predetermined number of experts regardless of the input's complexity, our method dynamically selects experts based on the confidence level in expert selection for each input. This allows for more efficient utilization of computational resources, activating more experts for complex tasks that require advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates significant improvements over conventional Top-2 routing across various benchmarks, achieving an average improvement of 0.7% while activating less than 90% of the parameters. Further analysis shows that our model dispatches more experts to tasks requiring complex reasoning skills, such as BBH, confirming its ability to dynamically allocate computational resources in alignment with the input's complexity. Our findings also highlight a variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. We will open-source all the models we trained in this project.
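The abstract describes routing by the router's confidence rather than a fixed Top-K. One plausible realization, sketched below under assumptions (the threshold value and function names are hypothetical, not taken from the paper), is to activate the smallest set of experts whose cumulative routing probability exceeds a confidence bound:

```python
import math

def dynamic_expert_selection(router_logits, threshold=0.4):
    """Pick the smallest expert set whose cumulative routing probability
    reaches `threshold` (hypothetical name for the confidence bound)."""
    # Numerically stable softmax over the router logits.
    m = max(router_logits)
    exps = [math.exp(x - m) for x in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank experts by routing probability, highest first.
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    selected, cum = [], 0.0
    for i in ranked:
        selected.append(i)
        cum += probs[i]
        if cum >= threshold:  # confidence bound reached: stop adding experts
            break
    return selected, [probs[i] for i in selected]

# A confident router concentrates mass on one expert -> fewer activated.
confident, _ = dynamic_expert_selection([4.0, 1.0, 0.5, 0.2])
# An uncertain router spreads mass -> more experts are activated.
uncertain, _ = dynamic_expert_selection([1.0, 0.9, 0.8, 0.1])
```

Under this sketch, easy inputs with a peaked router distribution activate a single expert, while ambiguous inputs activate several, matching the abstract's claim that harder tasks receive more experts.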
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models
Languages Studied: English
Preprint Status: We plan to release a non-anonymous preprint in the next two months (i.e., during the reviewing process).
A1: yes
A1 Elaboration For Yes Or No: Limitations section
A2: n/a
A3: yes
A3 Elaboration For Yes Or No: Abstract and Introduction
B: yes
B1: yes
B1 Elaboration For Yes Or No: 3 Experiment
B2: n/a
B3: n/a
B4: no
B4 Elaboration For Yes Or No: The data we used are widely used benchmarks.
B5: n/a
B6: n/a
C: yes
C1: yes
C1 Elaboration For Yes Or No: 3 Experiment
C2: yes
C2 Elaboration For Yes Or No: 3 Experiment
C3: no
C3 Elaboration For Yes Or No: We report the result of a single run, as training a model from scratch costs too much.
C4: yes
C4 Elaboration For Yes Or No: 3.1.3
D: no
E: yes
E1: no
E1 Elaboration For Yes Or No: We only used it to polish the writing.
