Abstract: Deep neural networks excel at image classification on benchmarks like ImageNet, yet they remain vulnerable to adverse conditions, including environmental changes and sensor degradations such as lens blur or camera noise. Consequently, such noise corruptions have been studied extensively. At the same time, image blur, which arises naturally in optical systems, has been largely ignored as a threat to model robustness. In fact, Gaussian blur has even been considered a viable defense against adversarial attacks. In this work, we challenge the common perception of blur as a rather benign data corruption and study optics-driven, blur-based adversarial attacks. Specifically, we introduce Adverse Lens Corruption (ALC), an optical adversarial attack that identifies worst-case lens blurs by optimizing Zernike polynomial-based aberrations via gradient descent. Unlike traditional noise-based attacks, ALC provides a physically grounded, continuous search space. This enables the analysis of model robustness to optics-driven blur corruptions and complements existing noise and corruption benchmarks.
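The abstract only names the ingredients of ALC, so the following is a minimal sketch of the general idea, not the authors' implementation: a pupil phase built from a few low-order Zernike modes, a Fourier-optics point spread function (PSF), and gradient ascent on the classifier loss over the aberration coefficients. The mode set, the optimizer, the step count, and all function names are our assumptions, written in PyTorch.

```python
# Illustrative sketch of a Zernike-aberration adversarial blur (assumptions,
# not the ALC reference code): optimize aberration coefficients so that the
# resulting physically plausible lens blur maximizes the classifier's loss.
import torch
import torch.nn.functional as F
import torchvision

def zernike_basis(n=63):
    # Sample a few low-order Zernike modes on the unit pupil disk.
    y, x = torch.meshgrid(torch.linspace(-1, 1, n),
                          torch.linspace(-1, 1, n), indexing="ij")
    r2 = x ** 2 + y ** 2
    disk = (r2 <= 1.0).float()
    modes = torch.stack([
        2 * r2 - 1,            # defocus
        x ** 2 - y ** 2,       # astigmatism (0 degrees)
        2 * x * y,             # astigmatism (45 degrees)
        (3 * r2 - 2) * x,      # coma along x
        (3 * r2 - 2) * y,      # coma along y
    ])
    return modes * disk, disk

def psf_from_coeffs(coeffs, modes, disk):
    # Fourier-optics model: PSF = |FFT(pupil)|^2, pupil phase = sum of modes.
    phase = (coeffs.view(-1, 1, 1) * modes).sum(dim=0)
    pupil = disk * torch.exp(1j * phase)
    psf = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
    return psf / psf.sum()     # normalize so the blur preserves intensity

def blur(image, psf):
    # Depthwise "same" convolution of an NCHW image with one shared PSF.
    c = image.shape[1]
    kernel = psf.expand(c, 1, *psf.shape)
    return F.conv2d(image, kernel, padding=psf.shape[-1] // 2, groups=c)

# Hypothetical setup: any differentiable classifier and one labeled image.
model = torchvision.models.resnet18(weights=None).eval()
image, label = torch.rand(1, 3, 224, 224), torch.tensor([0])

modes, disk = zernike_basis()
coeffs = torch.zeros(modes.shape[0], requires_grad=True)
opt = torch.optim.Adam([coeffs], lr=0.05)
for _ in range(100):
    out = model(blur(image, psf_from_coeffs(coeffs, modes, disk)))
    loss = F.cross_entropy(out, label)
    opt.zero_grad()
    (-loss).backward()         # gradient *ascent*: seek the worst-case blur
    opt.step()
```

Under this reading, the "physically grounded, continuous search space" the abstract refers to would be the coefficient vector `coeffs`: every setting corresponds to a realizable lens aberration rather than arbitrary pixel-space noise.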
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Haohan_Wang1
Submission Number: 7568