Rapid Plug-in Defenders

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 | NeurIPS 2024 poster | CC BY 4.0
Keywords: Rapid Plug-in Defenders, Few-shot Adversarial Training, Adversarial Examples and Defenses
TL;DR: Pre-trained Transformers as Rapid Plug-in Defenders
Abstract: Deep neural networks deployed in everyday services must be reliable, yet their vulnerability to adversarial attacks, primarily evasion attacks, threatens their functionality. Common approaches to improving robustness rely on heavy adversarial training or on knowledge learned from clean data, both of which demand substantial computational resources. This time-intensive nature severely limits the agility with which large foundation models can counter adversarial perturbations. To address this challenge, this paper focuses on the \textbf{Ra}pid \textbf{P}lug-\textbf{i}n \textbf{D}efender (\textbf{RaPiD}) problem: rapidly countering adversarial perturbations without altering the deployed model. Drawing on the generalization and universal-computation abilities of pre-trained transformers, we propose \textbf{CeTaD} (\textbf{C}onsidering Pr\textbf{e}-trained \textbf{T}ransformers \textbf{a}s \textbf{D}efenders), a method for RaPiD optimized for efficient computation. \textbf{CeTaD} fine-tunes only the normalization-layer parameters within the defender, using a limited set of clean and adversarial examples. Our evaluation assesses \textbf{CeTaD}'s effectiveness, transferability, and the contribution of its components in one-shot adversarial settings. The method adapts rapidly to various attacks and application scenarios without modifying the target model or the clean training data. We also examine how varying training-data conditions affect \textbf{CeTaD}'s performance. Notably, \textbf{CeTaD} transfers across differentiable service models and demonstrates the potential for continual learning.
Primary Area: Safety in machine learning
Submission Number: 13895
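
The abstract's core mechanism, placing a pre-trained transformer in front of a frozen service model and fine-tuning only its normalization-layer parameters on a few clean/adversarial pairs, can be illustrated with a minimal PyTorch sketch. Everything here is an assumption beyond the abstract: the class and function names (`PluginDefender`, `few_shot_adapt`), the residual wiring of the defender, and the training loss are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn


class PluginDefender(nn.Module):
    """Wrap a frozen, deployed `target` model with a pre-trained transformer
    `defender` placed in front of it. Assumption: the defender maps inputs
    back to input space (e.g. an autoencoder-style ViT)."""

    def __init__(self, defender: nn.Module, target: nn.Module):
        super().__init__()
        self.defender = defender
        self.target = target
        # Freeze every parameter of both models ...
        for p in self.parameters():
            p.requires_grad = False
        # ... then re-enable gradients only for the defender's LayerNorm
        # parameters, the few-shot-tunable part described in the abstract.
        for m in self.defender.modules():
            if isinstance(m, nn.LayerNorm):
                for p in m.parameters():
                    p.requires_grad = True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed residual wiring: the defender predicts a correction that is
        # added to the (possibly perturbed) input before the target model.
        return self.target(x + self.defender(x))


def few_shot_adapt(model: PluginDefender, clean_x, adv_x, labels,
                   steps: int = 100, lr: float = 1e-3) -> None:
    """Adapt the defender on a handful of paired clean/adversarial examples
    (one pair per class in the one-shot setting)."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        # Fit both adversarial and clean inputs, so robustness is gained
        # without sacrificing clean accuracy (illustrative equal weighting).
        loss = loss_fn(model(adv_x), labels) + loss_fn(model(clean_x), labels)
        loss.backward()
        opt.step()
```

Because only the LayerNorm affine parameters receive gradients, the trainable parameter count stays tiny relative to either model, which is what makes this kind of adaptation rapid and leaves the deployed target model untouched.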