CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning

ACL ARR 2025 May Submission 613 Authors

14 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: In parameter-efficient fine-tuning, mixture-of-experts (MoE), which specializes functionality into different experts and sparsely activates the appropriate ones, has been widely adopted as a promising way to trade off model capacity against computational overhead. However, current MoE variants fall short on heterogeneous datasets: they ignore the fact that different experts may learn similar knowledge, leaving MoE's capacity underutilized. In this paper, we propose Contrastive Representation for MoE (CoMoE), a novel method to promote modularization and specialization in MoE, where the experts are trained with a contrastive objective constructed by sampling from the activated and inactivated experts under top-k routing. We demonstrate that such a contrastive objective recovers the mutual-information gap between inputs and the two types of experts. Experiments on several benchmarks and in multi-task settings demonstrate that CoMoE consistently enhances MoE's capacity and promotes modularization among the experts.
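
The abstract gives no implementation details, but the mechanism it describes, an InfoNCE-style contrastive term that treats activated (top-k) experts as positives and inactivated experts as negatives, can be sketched roughly as below. This is a minimal illustration, not the authors' code: the function and variable names (contrastive_moe_loss, tau, topk_idx), the cosine-similarity measure, and the temperature are all assumptions, and the paper's actual formulation may differ.

```python
# Hypothetical sketch of a contrastive objective over activated vs.
# inactivated experts in top-k routing. Names and design choices
# (cosine similarity, temperature tau) are assumptions, not the
# paper's actual implementation.
import torch
import torch.nn.functional as F

def contrastive_moe_loss(x, expert_outputs, topk_idx, tau=0.1):
    """InfoNCE-style loss: pull each input toward its activated
    (top-k) experts and push it away from the inactivated ones.

    x:              (batch, dim)            input representations
    expert_outputs: (batch, n_experts, dim) each expert applied to x
    topk_idx:       (batch, k)              indices of activated experts
    """
    # Cosine similarity between each input and every expert's output.
    sim = F.cosine_similarity(x.unsqueeze(1), expert_outputs, dim=-1) / tau  # (B, E)

    # Boolean mask marking the activated experts as positives.
    pos_mask = torch.zeros_like(sim, dtype=torch.bool)
    pos_mask.scatter_(1, topk_idx, True)

    # Log-probability of each expert under a softmax over all experts;
    # averaging the negative log-probability of the k positives gives an
    # InfoNCE bound, matching the mutual-information-gap interpretation
    # the abstract alludes to.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # (B, E)
    loss = -(log_prob[pos_mask].view(x.size(0), -1)).mean()
    return loss
```

In a setup like this, the contrastive term would presumably be added to the task loss with a weighting coefficient during fine-tuning, so that routing and expert specialization are shaped jointly with the downstream objective.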
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: contrastive learning, representation learning, multi-task learning
Contribution Types: Approaches to low-resource settings
Languages Studied: English
Keywords: contrastive learning, representation learning, multi-task learning
Submission Number: 613