Collective Model Intelligence Requires Compatible Specialization

Published: 06 Mar 2025, Last Modified: 05 Apr 2025
Venue: MCDC @ ICLR 2025
License: CC BY 4.0
Keywords: MoE, Model Merging, CKA
TL;DR: Model merging via feature averaging underperforms due to representation mismatch. We propose routing-based combination over well-defined input-output spaces as a path toward effective collective model intelligence.
Abstract: In this work, we explore the limitations of combining models by averaging intermediate features, referred to as $\textit{model merging}$, and propose a new direction for achieving collective model intelligence through what we call $\textit{compatible specialization}$. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations.
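The abstract's CKA analysis can be illustrated with a short sketch. The paper does not specify which CKA variant it uses, so the snippet below assumes the common linear-kernel form, comparing same-layer activations of two fine-tuned specialists on a shared probe batch; all names and shapes are illustrative, not taken from the paper.

```python
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Returns a value in [0, 1]; higher means the two representations
    share more of their (linear) structure.
    """
    # Center each feature dimension over the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # ||Y^T X||_F^2 measures the cross-covariance between the two spaces;
    # the denominator normalizes by each space's own covariance norm.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)


# Hypothetical usage: compare layer-l activations of two specialists
# evaluated on the same probe inputs (data here is synthetic).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats_model_a = rng.normal(size=(256, 768))  # specialist A, layer l
    feats_model_b = feats_model_a + rng.normal(scale=2.0, size=(256, 768))  # diverged specialist B
    print(f"CKA similarity: {linear_cka(feats_model_a, feats_model_b):.3f}")
```

Under this reading, a drop in CKA between corresponding layers as fine-tuning progresses is the signal the abstract describes: the specialists' feature spaces drift apart, which is what makes naive feature averaging degrade.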
Submission Number: 10