Keywords: Model Merging, Model Editing, Task Vector
TL;DR: We identify previously unrecognized detrimental factors in model merging and introduce DisTaC, a knowledge distillation-based approach that mitigates their effects.
Abstract: Model merging has emerged as an efficient and flexible paradigm for multi-task learning, with numerous methods being proposed in recent years.
However, these state-of-the-art techniques are typically evaluated on benchmark suites that are highly favorable to model merging, and their robustness in more realistic settings remains largely unexplored.
In this work, we first investigate the vulnerabilities of model-merging methods and pinpoint the source-model characteristics that critically underlie them.
Specifically, we identify two factors that are particularly harmful to the merging process: (1) disparities in task-vector norms, and (2) low confidence of the source models. To address these issues, we propose **DisTaC** (**Dis**tillation for **Ta**sk vector **C**onditioning), a novel method that pre-conditions problematic task vectors before merging. DisTaC leverages knowledge distillation to adjust a task vector's norm and increase source-model confidence while preserving its essential task-specific knowledge. Our extensive experiments demonstrate that, with task vectors pre-conditioned by DisTaC, state-of-the-art merging techniques can successfully integrate models exhibiting these harmful traits, in cases where they would otherwise fail, and achieve significant performance gains.
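For readers unfamiliar with the quantities the abstract refers to, the following minimal PyTorch sketch illustrates task vectors as parameter differences between fine-tuned and pretrained checkpoints, the global-norm disparity that the paper identifies as harmful, and a standard task-arithmetic merge. All function and variable names here are illustrative assumptions, not the paper's implementation, and the naive rescaling shown is only a stand-in: DisTaC itself conditions task vectors via knowledge distillation.

```python
# Illustrative sketch (assumptions, not the paper's code): task vectors as
# parameter deltas, a global-norm check, a naive norm rescaling, and a
# standard task-arithmetic merge. DisTaC adjusts norms and source-model
# confidence via knowledge distillation rather than the rescaling below.
import torch


def compute_task_vector(pretrained_sd, finetuned_sd):
    """Task vector = fine-tuned weights minus pretrained weights."""
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}


def global_norm(task_vector):
    """L2 norm taken over all parameters of a task vector."""
    return torch.sqrt(sum(v.float().pow(2).sum() for v in task_vector.values()))


def rescale_to_target_norm(task_vector, target_norm):
    """Naive conditioning: rescale a task vector to a common global norm.
    (Illustrative only; DisTaC achieves this via distillation.)"""
    scale = target_norm / (global_norm(task_vector) + 1e-12)
    return {k: v * scale for k, v in task_vector.items()}


def merge_with_task_arithmetic(pretrained_sd, task_vectors, alpha=0.3):
    """Standard task-arithmetic merge: theta = theta_pre + alpha * sum(tau_t)."""
    merged = {k: v.clone() for k, v in pretrained_sd.items()}
    for tv in task_vectors:
        for k in merged:
            merged[k] += alpha * tv[k]
    return merged
```

In this sketch, comparing `global_norm` across the source models' task vectors surfaces the norm disparities the paper flags; equalizing them before `merge_with_task_arithmetic` is the kind of pre-conditioning step DisTaC performs, but with distillation so that task-specific knowledge is preserved.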
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 10095