Brain-inspired predictive coding dynamics improve the robustness of deep neural networks

Published: 03 Nov 2020, Last Modified: 05 May 2023 · SVRHM @ NeurIPS Poster
Keywords: predictive coding, neuroscience, robustness, machine learning, deep learning
TL;DR: We incorporate recurrent feedback connections based on predictive coding principles, as formulated in neuroscience, into feedforward machine learning models and show that this improves their robustness to natural and adversarial noise.
Abstract: Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work we address this shortcoming by incorporating brain-inspired recurrent dynamics into deep convolutional networks. We augment a pretrained feedforward classification model (VGG16 trained on ImageNet) with a “predictive coding” strategy, a framework popular in neuroscience for characterizing cortical function. At each layer of the hierarchical model, generative feedback “predicts” (i.e., reconstructs) the pattern of activity in the previous layer. The reconstruction errors are used to iteratively update the network’s representations across timesteps, and to optimize the network’s feedback weights over the natural image dataset, a form of unsupervised training. We demonstrate that this results in a network with improved robustness compared to the corresponding feedforward baseline, not only against various types of noise but also against a suite of adversarial attacks. We propose that most feedforward models could be equipped with these brain-inspired feedback dynamics, thus improving their robustness to input perturbations.
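To make the recurrent dynamics concrete, here is a minimal PyTorch sketch of the idea described in the abstract: a generative feedback decoder reconstructs the activity of the layer below, and the reconstruction error drives iterative updates of the representation. The single encoder/decoder pair (`ff`, `fb`), the coefficients `beta` and `alpha`, and the exact update rule are illustrative assumptions, not the paper's precise formulation.

```python
# Minimal sketch of predictive-coding dynamics on one layer of a feedforward net.
# `ff` stands in for a pretrained encoder block (e.g. one VGG16 stage); `fb` is the
# generative feedback decoder. Hyperparameters and update rule are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

ff = nn.Conv2d(3, 64, 3, padding=1)            # stand-in for a pretrained encoder block
fb = nn.ConvTranspose2d(64, 3, 3, padding=1)   # generative feedback (decoder)

def run_dynamics(x, timesteps=10, beta=0.4, alpha=0.01):
    """Iteratively refine the representation r using reconstruction errors."""
    r = ff(x)                                   # initial feedforward pass
    for _ in range(timesteps):
        r = r.detach().requires_grad_(True)
        recon = fb(r)                           # feedback "predicts" the layer below
        err = F.mse_loss(recon, x)              # reconstruction (prediction) error
        grad, = torch.autograd.grad(err, r)     # how r should change to reduce err
        # Mix the feedforward drive, the current state, and an error-correction step.
        r = beta * ff(x) + (1 - beta) * r - alpha * grad
        # During training, the same reconstruction loss would also be used to
        # optimize fb's weights over natural images (unsupervised).
    return r

x = torch.randn(1, 3, 32, 32)
r_final = run_dynamics(x)
print(r_final.shape)  # torch.Size([1, 64, 32, 32])
```

In the setup the abstract describes, the feedback weights would first be optimized with this reconstruction loss over the natural image dataset, and the timestep dynamics would then be run at inference on (possibly perturbed) inputs.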