Keywords: adversarial attack, gradient-feature alignment, DeepDefense, DeepFool
Abstract: Deep neural networks are known to be vulnerable to adversarial perturbations—small, carefully crafted input modifications that lead to incorrect predictions. In this paper, we propose \textit{DeepDefense}, a novel defense framework that applies Gradient-Feature Alignment (GFA) regularization across multiple layers to suppress adversarial vulnerability. By aligning input gradients with internal feature representations, DeepDefense promotes a smoother loss landscape in tangential directions, thereby reducing the model's sensitivity to adversarial noise.
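The abstract does not specify how the alignment term is computed; one plausible sketch, assuming alignment is measured as cosine similarity between the task-loss gradient at an intermediate layer and that layer's activations (the `SmallNet`, `gfa_loss`, and `lam` names are illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-stage model used only to expose an intermediate activation.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 16)
        self.head = nn.Linear(16, 3)

    def forward(self, x):
        h = torch.relu(self.features(x))  # intermediate feature representation
        return h, self.head(h)

def gfa_loss(model, x, y, lam=0.1):
    """Cross-entropy plus a sketched gradient-feature alignment penalty."""
    h, logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Gradient of the task loss w.r.t. the intermediate activation,
    # kept in the graph so the penalty itself is differentiable.
    g = torch.autograd.grad(ce, h, create_graph=True)[0]
    # Penalize misalignment (1 - cosine similarity) between the
    # layer gradient and the layer's own features.
    align = F.cosine_similarity(g.flatten(1), h.flatten(1), dim=1)
    return ce + lam * (1.0 - align).mean()

torch.manual_seed(0)
model = SmallNet()
x = torch.randn(4, 8)
y = torch.tensor([0, 1, 2, 0])
loss = gfa_loss(model, x, y)
loss.backward()  # trains through both the CE term and the alignment penalty
```

A multi-layer variant would simply sum such penalties over several chosen layers, as the abstract's "across multiple layers" phrasing suggests.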
We provide theoretical insights into how an adversarial perturbation decomposes into radial and tangential components and demonstrate that alignment suppresses loss variation in the tangential directions, where most attacks are effective. Empirically, our method achieves significant improvements in robustness against both gradient-based and optimization-based attacks. For example, on CIFAR-10, CNN models trained with DeepDefense outperform standard adversarial training by up to 15.2\% under APGD attacks and 24.7\% under FGSM attacks. Against optimization-based attacks such as DeepFool and EADEN, DeepDefense requires 20--30 times larger perturbation magnitudes to cause misclassification, indicating stronger decision boundaries and a flatter loss landscape. Our approach is architecture-agnostic, simple to implement, and highly effective, offering a promising direction for improving the adversarial robustness of deep learning models.
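The radial/tangential decomposition mentioned above has a standard form: project the perturbation onto the input direction (radial) and keep the orthogonal remainder (tangential). A minimal sketch, assuming the decomposition is taken relative to the flattened input vector (the function name is illustrative):

```python
import numpy as np

def decompose_perturbation(x, delta):
    """Split a perturbation into components radial (parallel) and
    tangential (orthogonal) to the input direction x."""
    x = np.asarray(x, dtype=float).ravel()
    delta = np.asarray(delta, dtype=float).ravel()
    # Orthogonal projection of delta onto span{x}.
    radial = (delta @ x) / (x @ x) * x
    tangential = delta - radial
    return radial, tangential

# With x along the first axis, the split is exact and easy to check:
radial, tangential = decompose_perturbation([1.0, 0.0], [0.3, 0.4])
# radial -> [0.3, 0.0], tangential -> [0.0, 0.4]
```

Under this reading, the abstract's claim is that GFA flattens the loss along `tangential`, the subspace attacks like DeepFool predominantly exploit.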
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 2488