On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron

Published: 29 Jul 2020, Last Modified: 05 May 2023, VIPriors Oral
Keywords: Sparse neural networks, unsupervised training, adversarial robustness
Supplementary Material: zip
Abstract: In this paper, we propose two closely connected methods to improve computational efficiency and stability against adversarial perturbations on contour recognition tasks: (a) a novel model of an artificial neuron, a "strong neuron," with inherent robustness against adversarial perturbations, and (b) a novel constructive training algorithm that generates sparse networks with $O(1)$ connections per neuron. We achieved a 10x reduction in operation count compared with other sparsification approaches (100x compared with dense networks). State-of-the-art stability against adversarial perturbations was achieved without any counter-adversarial measures, relying on the robustness of strong neurons alone. Our network makes extensive use of unsupervised feature detection, with more than 95\% of operations performed in its unsupervised parts. Fewer than 10,000 supervised FLOPs per class are required to recognize a contour (a digit or a traffic sign), which leads us to conclude that contour recognition is much simpler than was previously thought.
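The operation-count claim follows from the constant fan-in: when each neuron keeps only a fixed number of connections, the cost of a layer scales with the neuron count rather than with the full input width. The back-of-envelope sketch below is purely illustrative; the layer width N_IN, N_OUT and the per-neuron fan-in K are assumptions for the sake of the arithmetic, not values taken from the paper.

    # Illustrative op-count comparison: dense layer vs. a sparse layer with a
    # constant number of connections per neuron ($O(1)$ fan-in, as in the abstract).
    # N_IN, N_OUT, and K are hypothetical values, not the paper's architecture.
    N_IN = 1024    # hypothetical input width
    N_OUT = 1024   # hypothetical number of neurons in the layer
    K = 8          # hypothetical constant fan-in per neuron

    dense_macs = N_IN * N_OUT   # every neuron reads every input
    sparse_macs = K * N_OUT     # every neuron reads only K inputs

    print(f"dense layer:  {dense_macs:,} multiply-accumulates")
    print(f"sparse layer: {sparse_macs:,} multiply-accumulates")
    print(f"reduction:    {dense_macs / sparse_macs:.0f}x")

With these assumed sizes the sparse layer needs roughly 128x fewer multiply-accumulates than its dense counterpart, which is the same order of magnitude as the ~100x dense-to-sparse reduction reported in the abstract.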