Abstract: Merging multiple expert models offers a promising approach for performing multi-task learning without accessing their original data. Existing methods attempt to alleviate task conflicts by sparsifying task vectors or promoting orthogonality among them. However, they overlook the fundamental objective of model merging: ensuring the merged model performs as closely as possible to the task-specific models on their respective tasks. We find that these methods inevitably discard task-specific information that, while causing conflicts, is crucial for performance. Based on our findings, we frame model merging as a constrained optimization problem ($\textit{i.e.}$, minimizing the gap between the merged model and individual models, subject to the constraint of retaining shared knowledge) and solve it via adaptive projective gradient descent. Specifically, we align the merged model with individual models by decomposing and reconstituting the loss function, alleviating conflicts through $\textit{data-free}$ optimization of task vectors. To retain shared knowledge, we optimize this objective by projecting gradients onto a $\textit{shared subspace}$ spanning all tasks. Moreover, we view merging coefficients as adaptive learning rates and propose a task-aware, training-free strategy for setting them. Experiments show that our plug-and-play approach consistently outperforms previous methods, achieving state-of-the-art results across diverse architectures and tasks in both vision and NLP domains.
Lay Summary: Merging multiple models presents a promising approach to multi-task learning. We posit that the fundamental objective of model merging is for the merged model to perform as closely as possible to the task-specific models on their respective tasks. Building on this insight, we formulate model merging as a constrained optimization problem—minimizing the performance gap between the merged model and individual models while preserving shared knowledge—and solve it using adaptive projective gradient descent. Our method is entirely data-free. Experiments demonstrate that this plug-and-play approach consistently achieves state-of-the-art results across diverse architectures and tasks in both vision and NLP domains.
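To make the projection idea concrete, below is a minimal NumPy sketch of data-free projected gradient descent on task vectors. It assumes flattened task vectors (parameter deltas), builds a shared subspace from their top singular vectors, and treats per-task coefficients as adaptive step sizes. The specific objective, subspace construction, rank, and coefficient values here are illustrative assumptions, not the exact formulation proposed in the paper.

```python
import numpy as np

def shared_subspace_basis(task_vectors, rank):
    """Orthonormal basis (rows) for the subspace shared across tasks.

    task_vectors: list of flattened parameter deltas, one per task.
    The top right-singular vectors of the stacked task vectors serve as a
    simple stand-in for a shared subspace spanning all tasks.
    """
    T = np.stack(task_vectors)                    # (num_tasks, num_params)
    _, _, Vt = np.linalg.svd(T, full_matrices=False)
    return Vt[:rank]                              # (rank, num_params)

def project(grad, basis):
    # Keep only the gradient component lying in the shared subspace,
    # so updates do not overwrite knowledge common to all tasks.
    return basis.T @ (basis @ grad)

def merge(task_vectors, coeffs, rank=2, lr=0.1, steps=200):
    """Data-free merging by projected gradient descent (illustrative).

    Objective: 0.5 * sum_t coeffs[t] * ||merged - task_vectors[t]||^2,
    i.e. keep the merged task vector close to each individual one, with
    per-task coefficients acting as adaptive learning rates.
    """
    basis = shared_subspace_basis(task_vectors, rank)
    merged = np.mean(task_vectors, axis=0)        # task-arithmetic initialization
    for _ in range(steps):
        grad = sum(c * (merged - tv) for c, tv in zip(coeffs, task_vectors))
        merged -= lr * project(grad, basis)
    return merged

# Toy usage: two tasks, four parameters.
tau = [np.array([1.0, 0.0, 0.5, -0.2]), np.array([0.8, 0.3, -0.1, 0.4])]
theta_delta = merge(tau, coeffs=[0.5, 0.5])
```

In practice the merged task vector would be added back to the pre-trained weights (typically per layer); the sketch only illustrates the projected-update mechanics and the role of coefficients as step sizes.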
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Model Merging, Task Vector, Gradient Projection
Flagged For Ethics Review: true
Submission Number: 307