Keywords: backdoor attacks, backdoor defense, model merging, backdoor transfer, robustness
TL;DR: We show that representing backdoors as task vectors improves the understanding of backdoor robustness in model merging, enabling both more resilient attacks and a task-arithmetic-based defense.
Abstract: Model merging (MM) has recently emerged as an effective method for combining large deep learning models. However, it poses significant security risks. Recent research shows that it is highly susceptible to backdoor attacks, which plant a hidden trigger in a single fine-tuned model, allowing the adversary to control the output of the final merged model at inference time. In this work, we propose a simple framework for understanding backdoor attacks by treating the attack itself as a task vector.
A $\textit{Backdoor Vector (BV)}$ is computed as the difference between the weights of a backdoored fine-tuned model and its clean fine-tuned counterpart. This representation reveals new insights: analogies reflect backdoor transfer, addition injects attacks, and subtraction helps remove them. Furthermore, we propose a novel method, dubbed $\textit{Sparse Backdoor Vector (SBV)}$, that combines multiple attacks into a single, more resilient one through merging.
We identify the core vulnerability behind backdoor threats in MM: $\textit{inherent triggers}$ that exploit adversarial weaknesses in the base model. To counter this, we propose $\textit{Injection BV Subtraction (IBVS)}$ -- a vector-based, assumption-free defense against backdoors in MM. Our results show that SBV surpasses prior attacks and is the first to leverage merging itself for backdoor attacks, while IBVS provides a lightweight, general defense that remains effective even when the backdoor threat is entirely unknown.
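Below is a minimal sketch of the backdoor-vector arithmetic the abstract describes, assuming models are given as PyTorch state dicts with matching parameter keys. The helper names (`backdoor_vector`, `apply_vector`, `sparse_combine`) and the top-magnitude sparsification rule are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of backdoor-vector arithmetic; names and the sparsification
# rule are assumptions for illustration, not the authors' implementation.
import torch

def backdoor_vector(backdoored, clean):
    """BV = theta_backdoored - theta_clean, computed per parameter tensor."""
    return {k: backdoored[k] - clean[k] for k in clean}

def apply_vector(weights, vector, alpha=1.0):
    """alpha > 0 injects the attack; alpha < 0 subtracts (removes) it."""
    return {k: weights[k] + alpha * vector[k] for k in weights}

def sparse_combine(vectors, density=0.1):
    """Combine several BVs into one, keeping only the largest-magnitude entries."""
    combined = {k: sum(v[k] for v in vectors) for k in vectors[0]}
    sparse = {}
    for k, t in combined.items():
        keep = max(1, int(density * t.numel()))
        # magnitude cutoff that separates the top-`keep` entries
        cutoff = t.abs().flatten().kthvalue(t.numel() - keep + 1).values
        sparse[k] = torch.where(t.abs() >= cutoff, t, torch.zeros_like(t))
    return sparse
```

Under this reading, injecting a known BV and later subtracting it corresponds to `apply_vector` with `alpha = 1` and `alpha = -1`, which matches the spirit of the IBVS defense named in the abstract; the actual procedure may differ.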
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 20951