Keywords: Mixture of Experts, Parameter-efficient fine-tuning, Large Language Model
TL;DR: A unified framework for PEFT in MoE models, and PERFT as a family of effective adaptation strategies.
Abstract: The Mixture-of-Experts (MoE) paradigm has emerged as a promising approach for scaling transformer-based large language models (LLMs) with improved resource utilization.
However, efficiently fine-tuning MoE LLMs remains largely underexplored.
Inspired by recent works on Parameter-Efficient Fine-Tuning (PEFT), we present a unified framework for integrating PEFT modules into MoE LLMs.
Our framework, aligned with the core mechanisms of MoE, encompasses a comprehensive set of design dimensions including various functional and composition strategies.
By combining the key design choices within our framework, we introduce Parameter-Efficient Routed Fine-Tuning (PERFT) as a flexible and scalable family of PEFT strategies tailored for MoE LLMs.
Extensive experiments adapting OLMoE-1B-7B and Mixtral-8×7B for commonsense and arithmetic reasoning tasks demonstrate the effectiveness, scalability, and intriguing dynamics of PERFT.
Additionally, we provide empirical findings for each specific design choice to facilitate better application of MoE and PEFT.
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4518