Keywords: divisive normalization, deep artificial neural networks, robustness, local competition, computational neuroscience
Abstract: Convolutional Neural Networks (CNNs) embody priors about the visual world: locality, stationarity of statistics, translation invariance, and compositionality. CNNs also mirror the retinotopy of visual cortex, in that nearby pixels are processed by nearby neurons. A common cortical computation not usually included in CNNs is divisive normalization. Divisive normalization of Gabor filter responses has been shown to yield more statistically independent outputs (Simoncelli & Heeger, 1998). In this paper, we model divisive normalization as a simple, computationally efficient layer that can be inserted at any stage of a deep artificial neural network. Divisive normalization acts on neuronal sub-populations whose parameters are initialized from a multivariate Gaussian distribution. This leads to the emergence of learned competition between both orientation-preferring and color-opponent cell types. Divisive normalization improves categorization performance as well as robustness to perturbed images. Interestingly, in smaller networks, divisive normalization, being itself a non-linear operation, eliminates the need for a separate non-linear activation function such as ReLU to drive performance.
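As a rough illustration of the computation the abstract describes, the sketch below implements the standard divisive-normalization form, in which each unit's response is divided by a weighted pool of the activity in its sub-population. The pooling weights `w`, the semisaturation constant `sigma`, and the exponent `n` are illustrative assumptions here, not the paper's learned parameters (which it initializes from a multivariate Gaussian).

```python
import numpy as np

def divisive_normalization(x, w, sigma=1.0, n=2.0):
    """Normalize each response by a weighted pool of its sub-population.

    Implements the canonical form
        y_i = sign(x_i) * |x_i|**n / (sigma**n + sum_j w[i, j] * |x_j|**n)
    `w`, `sigma`, and `n` are placeholder choices for illustration only.
    """
    x = np.asarray(x, dtype=float)
    powered = np.abs(x) ** n
    denom = sigma ** n + w @ powered   # per-unit normalization pool
    return np.sign(x) * powered / denom

# Toy sub-population of 4 units with uniform competition weights.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = np.full((4, 4), 0.25)
y = divisive_normalization(x, w)
```

Because the denominator couples every unit to its neighbors, strongly active units suppress the others, which is the "learned competition" effect the abstract refers to when the weights are trained.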
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9116