- Keywords: anomalous pattern detection, subset scanning, node activations, adversarial noise
- TL;DR: We efficiently find a subset of images that have higher-than-expected activations at some subset of nodes. Viewed as a group, these images appear more anomalous and are easier to detect.
- Abstract: This work views neural networks as data generating systems and applies anomalous pattern detection techniques to that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component of multiple machine learning problems, including detecting the presence of adversarial noise added to inputs. More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces "Subset Scanning" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: "Which subset of inputs have larger-than-expected activations at which subset of nodes?" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are "weird together". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images in a test set increases. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack.
- Code: https://github.com/hikayifix/adversarialdetector
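The core question above — which subset of inputs has larger-than-expected activations at which subset of nodes — can be illustrated with a minimal sketch. The toy below uses a greedy alternating ascent over a simplified linear score (total excess activation over an expected baseline `mu`); the paper itself uses nonparametric scan statistics, and all function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def subset_scan(acts, mu, iters=10):
    """Toy subset scan over an (images x nodes) activation matrix.

    Alternates between the best image subset for a fixed node subset and
    the best node subset for a fixed image subset, maximizing the
    simplified score  sum_{i in S, j in N} (acts[i, j] - mu[j]).
    For this additive score, the optimal subset on each side is simply
    the rows/columns with positive total excess.
    """
    excess = acts - mu                       # deviation from expected activation
    nodes = np.ones(acts.shape[1], bool)     # start with all nodes selected
    imgs = np.zeros(acts.shape[0], bool)
    for _ in range(iters):
        # Best image subset for the current nodes: rows with positive excess.
        imgs = excess[:, nodes].sum(axis=1) > 0
        if not imgs.any():
            break
        # Best node subset for the current images: columns with positive excess.
        nodes = excess[imgs, :].sum(axis=0) > 0
    score = excess[np.ix_(imgs, nodes)].sum()
    return imgs, nodes, score

# Synthetic demo: plant an anomalously high-activation block
# (images 0-4 at nodes 0-2) on top of baseline noise.
rng = np.random.default_rng(1)
acts = rng.normal(0.0, 1.0, size=(20, 10))
acts[:5, :3] += 5.0
imgs, nodes, score = subset_scan(acts, mu=np.zeros(10))
```

On this synthetic example the scan recovers the planted block of jointly anomalous images and nodes; in the paper's setting, `acts` would hold activations from a network layer and `mu` the expected activations estimated from clean background data.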