Abstract: We train a feed-forward neural network with increased robustness against adversarial attacks compared to conventional training approaches. This is achieved using a novel pre-trained building block based on a mean-field description of a Boltzmann machine. On the MNIST dataset the method achieves strong adversarial resistance without data augmentation or adversarial training. We show that the increased adversarial resistance correlates with the generative performance of the underlying Boltzmann machine.
Keywords: adversarial images, Boltzmann machine, mean field approximation
TL;DR: Generative pre-training with mean field Boltzmann machines increases robustness against adversarial images in neural networks.
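The core ingredient, a mean-field description of a Boltzmann machine, can be illustrated with a short sketch. The snippet below is not the paper's implementation; it shows a generic deterministic mean-field iteration for a restricted Boltzmann machine, where stochastic unit states are replaced by their expected activations and the two layers are updated alternately until a fixed point. All names (`mean_field`, `W`, `b`, `c`) and the layer sizes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v0, W, b, c, n_iter=20):
    """Deterministic mean-field inference for an RBM (illustrative sketch).

    v0: initial visible vector, W: visible-hidden weights,
    b: visible biases, c: hidden biases (all hypothetical names).
    Units are replaced by their mean activations and updated
    alternately, which converges to a mean-field fixed point.
    """
    v = v0.astype(float).copy()
    h = sigmoid(c + v @ W)          # initial hidden means given v0
    for _ in range(n_iter):
        v = sigmoid(b + h @ W.T)    # visible means given hidden means
        h = sigmoid(c + v @ W)      # hidden means given visible means
    return v, h
```

Because the updates are deterministic and differentiable, such a mean-field block can in principle be embedded in a feed-forward network and trained end to end, which is one plausible reading of the "pre-trained building block" mentioned in the abstract.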