The Unseen Adversaries: Robust and Generalized Defense Against Adversarial Patches

Published: 03 Feb 2026, Last Modified: 06 Feb 2026 — AISTATS 2026 Poster — CC BY 4.0
Abstract: The vulnerability of deep neural networks to singularities raises serious concerns about their deployment in the physical world. One of the most prominent and impactful physical-world adversarial perturbations is the attachment of a patch to a clean image, known as an adversarial patch attack. Similarly, natural noises such as Gaussian and salt-and-pepper noise are highly prevalent in the real world. The present work is motivated by these limitations and by the dearth of efforts tackling these two singularities either independently or in combination. In this research, for the first time, we combine these two prominent singularities and propose a novel dataset. Using this dataset, we perform a benchmark study of singularity detection based on the features of several convolutional neural networks. For classification, contrary to the popular choice of neural-network-based parameter tuning, we employ traditional but effective machine learning classifiers. Extensive experiments across various in-distribution and out-of-distribution (OOD) singularities reveal several interesting findings about the effectiveness of these classifiers, and show that it is hard to defend against the adversaries when they are treated independently or when inefficient classifiers are selected.
Submission Number: 2034
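The detection pipeline sketched in the abstract (deep features followed by a traditional classifier) can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' method: hand-crafted intensity/gradient statistics stand in for CNN features, synthetic 16x16 images stand in for the proposed dataset, and a random forest stands in for the traditional classifiers benchmarked in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_image(clean=True, n=16):
    # Synthetic stand-in for a dataset image; corrupted images get one of
    # the two natural-noise singularities mentioned in the abstract.
    img = rng.uniform(0.3, 0.7, size=(n, n))
    if not clean:
        if rng.integers(2) == 0:  # Gaussian noise
            img = img + rng.normal(0.0, 0.15, size=img.shape)
        else:                     # salt-and-pepper noise
            mask = rng.uniform(size=img.shape)
            img[mask < 0.05] = 0.0
            img[mask > 0.95] = 1.0
    return np.clip(img, 0.0, 1.0)

def features(img):
    # Hypothetical stand-in for CNN features: simple per-image statistics.
    gx, gy = np.gradient(img)
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean(),
                     (img == 0.0).mean(), (img == 1.0).mean()])

# Label 0 = clean, 1 = corrupted by a singularity.
X = np.array([features(make_image(clean=(i % 2 == 0))) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"detection accuracy: {acc:.2f}")
```

On this easy synthetic split the classifier separates clean from corrupted images well above chance, mirroring the paper's point that classical classifiers over informative features can detect such singularities.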