Mitigating low-frequency bias: Feature recalibration and frequency attention regularization for adversarial robustness
Abstract: Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that HFDR consistently enhances adversarial robustness. It achieves a 2.89% gain on CIFAR-100 with WRN34-10, and improves robustness by 3.09% on ImageNet-1K, with a 4.89% gain on ViT-B against AutoAttack. These results highlight the method's adaptability to both convolutional and transformer-based architectures. Code is available at https://github.com/KejiaZhang-Robust/HFDR.
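The abstract describes separating features into low- and high-frequency components before recalibration. A minimal sketch of such a frequency split, using a circular FFT mask, is shown below; the `radius_ratio` cutoff and the mask shape are illustrative assumptions, not the paper's exact HFDR design.

```python
import numpy as np

def split_frequency(x, radius_ratio=0.25):
    """Split a 2-D feature map into low- and high-frequency parts.

    Illustrative sketch only: a circular low-pass mask in the shifted
    spectrum keeps coarse structure; the residual carries fine detail.
    The radius_ratio value is an assumed hyperparameter.
    """
    H, W = x.shape
    # Move to the frequency domain, centering the zero-frequency bin
    freq = np.fft.fftshift(np.fft.fft2(x))
    yy, xx = np.mgrid[0:H, 0:W]
    dist = np.sqrt((yy - H // 2) ** 2 + (xx - W // 2) ** 2)
    mask = (dist <= radius_ratio * min(H, W)).astype(float)
    # Low-frequency reconstruction; high frequency is the residual
    low = np.fft.ifft2(np.fft.ifftshift(freq * mask)).real
    high = x - low
    return low, high
```

By construction `low + high` reconstructs the input exactly, so a downstream module can reweight the two branches independently without losing information.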
External IDs: dblp:journals/nn/ZhangWCLL26