Interpretable Adverse Lens Corruptions

Published: 26 Sept 2025, Last Modified: 15 Oct 2025 · L2S Poster · CC BY 4.0
Keywords: Adversarial Attack, Adversarial Training, Optical Adversarial Attack
TL;DR: Deep neural networks, while excellent at image classification, are vulnerable to realistic lens blurs; our new optical adversarial attack, Adverse Lens Corruption (ALC), finds the blurs that are most challenging for current models.
Abstract: Deep neural networks excel at image classification on benchmarks like ImageNet, yet they remain vulnerable to adverse conditions, including environmental changes and sensor noise such as lens blur or camera noise. Consequently, such adverse noise corruptions have been studied extensively. At the same time, every camera lens has its own characteristic blur: while manufacturers optimize lenses to reduce aberrations, a compromise must be made between lens quality, available budget, and physical constraints. However, a study of adverse yet realistic blur corruptions is still missing. We introduce Adverse Lens Corruption (ALC), an optical adversarial attack that identifies worst-case lens blurs. ALC works by optimizing Zernike polynomial-based aberrations via gradient descent. This approach enables direct optical analysis and complements existing noise and corruption benchmarks, revealing which lens designs pose the greatest challenge to current models.
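
The core idea stated in the abstract, searching over Zernike aberration coefficients by gradient descent to find a worst-case yet physically plausible blur, can be sketched roughly as below. This is a minimal illustration and not the authors' implementation: the particular Zernike terms (defocus, astigmatism, coma), the pupil grid size, the Adam optimizer, and all hyperparameters are assumptions made for the example.

```python
# Sketch of an ALC-style attack: optimize Zernike pupil aberrations so the
# induced lens blur maximizes the classifier's loss. Illustrative only; the
# chosen terms, grid size, and optimizer settings are assumptions.
import torch
import torch.nn.functional as F

def zernike_basis(n_grid=65):
    """A few low-order Zernike polynomials sampled on the unit pupil disk."""
    y, x = torch.meshgrid(torch.linspace(-1, 1, n_grid),
                          torch.linspace(-1, 1, n_grid), indexing="ij")
    r, theta = torch.sqrt(x**2 + y**2), torch.atan2(y, x)
    mask = (r <= 1).float()
    basis = torch.stack([
        2 * r**2 - 1,                        # Z(2,0): defocus
        r**2 * torch.cos(2 * theta),         # Z(2,2): astigmatism
        (3 * r**3 - 2 * r) * torch.cos(theta),  # Z(3,1): coma
    ])
    return basis * mask, mask

def psf_from_coeffs(coeffs, basis, mask):
    """Point spread function of a pupil whose phase is a Zernike sum."""
    phase = (coeffs[:, None, None] * basis).sum(0)
    pupil = mask * torch.exp(1j * phase)
    psf = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
    return psf / psf.sum()  # normalize so the blur preserves brightness

def alc_attack(model, image, label, steps=50, lr=0.05):
    """Gradient-descent search for the worst-case lens blur."""
    basis, mask = zernike_basis()
    coeffs = torch.zeros(basis.shape[0], requires_grad=True)
    opt = torch.optim.Adam([coeffs], lr=lr)
    for _ in range(steps):
        psf = psf_from_coeffs(coeffs, basis, mask)
        # Apply the same PSF to each channel via a depthwise convolution.
        kernel = psf[None, None].expand(image.shape[1], 1, -1, -1)
        blurred = F.conv2d(image, kernel, padding="same",
                           groups=image.shape[1])
        loss = -F.cross_entropy(model(blurred), label)  # ascend the loss
        opt.zero_grad(); loss.backward(); opt.step()
    return coeffs.detach()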
Submission Number: 13