The Natural Tendency of Feed Forward Neural Networks to Favor Invariant Units

Published: 02 Oct 2019, Last Modified: 05 May 2023
Venue: Real Neurons & Hidden Units @ NeurIPS 2019 (Poster)
TL;DR: Rectification in deep neural networks naturally leads them to favor an invariant representation.
Keywords: deep networks, invariance, neuroscience
Abstract: A central goal in the study of the primate visual cortex and of hierarchical models for object recognition is understanding how and why single units trade off invariance against sensitivity to image transformations. For example, in both deep networks and visual cortex there is substantial variation from layer to layer and unit to unit in the degree of translation invariance. Here, we provide theoretical insight into this variation and its consequences for encoding in a deep network. Our critical insight is that rectification simultaneously decreases response variance and the correlation across responses to transformed stimuli, naturally inducing a positive relationship between invariance and dynamic range. Invariant input units then tend to drive the network more strongly than units sensitive to small image transformations. We discuss the consequences of this relationship for AI: deep nets naturally weight invariant units over sensitive ones, this preference can be strengthened by training, and it may contribute to generalization performance. Our results predict a signature relationship between invariance and dynamic range that can now be tested in future neurophysiological studies.
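The mechanism described in the abstract can be illustrated with a small simulation. The sketch below is illustrative only and not from the paper: it assumes a unit's pre-activation responses to a stimulus and its translated version are bivariate Gaussian with correlation rho = 0.9, and it sweeps the pre-activation mean mu relative to the ReLU threshold. The values of rho, mu, and the sample size are arbitrary assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9       # assumed pre-rectification correlation between responses
                # to a stimulus and its translated version
n = 100_000     # samples per simulated unit

# Simulate units whose pre-activations differ only in their mean relative
# to the rectification threshold (0); mu values are a hypothetical sweep.
for mu in [-1.0, 0.0, 1.0, 2.0]:
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([mu, mu], cov, size=n)  # (original, transformed)
    r = np.maximum(x, 0.0)                              # ReLU rectification
    post_var = r[:, 0].var()                            # proxy for dynamic range
    post_corr = np.corrcoef(r[:, 0], r[:, 1])[0, 1]     # proxy for invariance
    print(f"mu={mu:+.1f}  post-ReLU var={post_var:.3f}  corr={post_corr:.3f}")
```

Under these assumptions, units whose pre-activations sit further above threshold retain both more variance (larger dynamic range) and more cross-transformation correlation (greater invariance) after rectification, reproducing the positive relationship the abstract predicts.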