Keywords: Backdoor attacks, backdoor defense, model merging
TL;DR: We introduce a module-switching defense that outperforms weight averaging in mitigating backdoor attacks; its effectiveness is supported by theory on synthetic networks and strong empirical evidence on transformer models.
Abstract: Backdoor attacks pose a serious threat to deep neural networks (DNNs), allowing adversaries to implant triggers that elicit hidden behaviors at inference time. Defending against such vulnerabilities is especially difficult in the post-training setting, since end users lack training data and prior knowledge of the attacks. Model merging offers a cost-effective defense; however, existing methods such as weight averaging (WAG) provide reasonable protection only when multiple homologous models are available, becoming less effective with fewer models and placing heavy demands on defenders. We propose a module-switching defense (MSD) for disrupting backdoor shortcuts. We first validate its theoretical rationale and empirical effectiveness on two-layer networks, showing that it achieves higher backdoor divergence than WAG while preserving utility. For deep models, we evaluate MSD on Transformer architectures and design an evolutionary algorithm with selective mechanisms to optimize fusion strategies and identify the most effective combinations. Experiments show that MSD achieves stronger defense with fewer models in practical settings; even in the underexplored case of collusive attacks among multiple models, where some models share the same backdoor, the switching strategies found by MSD deliver superior robustness against diverse attacks.
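To make the contrast between the two merging strategies concrete, below is a minimal PyTorch-style sketch. It is an illustration under assumed names (`merge_wag`, `merge_msd`, `assignment`), not the authors' implementation; in particular, the paper's evolutionary search over switching strategies is abstracted into a fixed `assignment` mapping.

```python
import copy
import torch

def merge_wag(models):
    """Weight averaging (WAG): average corresponding parameters
    element-wise across homologous models (illustrative sketch)."""
    merged = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in models]
            )
            param.copy_(stacked.mean(dim=0))
    return merged

def merge_msd(models, assignment):
    """Module-switching defense (MSD): take each module whole from
    one source model, per an assignment such as one an evolutionary
    search might select (hypothetical interface)."""
    merged = copy.deepcopy(models[0])
    modules = dict(merged.named_modules())
    for module_name, source_idx in assignment.items():
        source = dict(models[source_idx].named_modules())[module_name]
        # Copy the entire module's weights from the chosen source model,
        # keeping each module intact rather than averaging it.
        modules[module_name].load_state_dict(source.state_dict())
    return merged

# Example: alternate Transformer layers between two homologous models
# (layer naming assumed; adapt to the actual architecture):
# assignment = {f"encoder.layer.{i}": i % 2 for i in range(12)}
# defended = merge_msd([model_a, model_b], assignment)
```

The intended intuition, per the abstract, is that switching whole modules between models disrupts the layer-to-layer backdoor shortcut more aggressively than averaging, which can leave a diluted trigger pathway intact.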
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15870