Local Geometry Constraints in V1 with Deep Recurrent Autoencoders

Published: 18 Oct 2022, Last Modified: 05 May 2023, SVRHM Poster
Keywords: Locality, Manifold Learning, Graph Laplacian, Phase Symmetry
TL;DR: Deep recurrent sparse autoencoders learn brain-like Gabor filters when an additional regularization term that captures physical constraints of V1 is added
Abstract: Sparse coding is a pillar of computational neuroscience, learning filters that describe the sensitivities of mammalian simple cell receptive fields (SCRFs) well in a least-squares sense. The overall distribution of SCRFs in purely sparse models, however, fails to match the distribution found experimentally. A number of subsequent updates aimed at overcoming this problem either limit the types of sparsity or disregard the dictionary learning framework entirely. We propose a weighted $\ell_1$ penalty (WL) that maintains a qualitatively new form of sparsity, one that produces receptive field profiles matching those found in primate data by more explicitly encouraging artificial neurons to use a similar subset of dictionary basis functions. The mathematical interpretation of the penalty as a Laplacian smoothness constraint implies an early-stage form of clustering in primary visual cortex, suggesting how the brain may exploit manifold geometry while balancing sparse and efficient representations.
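The abstract does not give the penalty's exact functional form, so the following is only a minimal sketch of how a Laplacian-based weighted $\ell_1$ term could enter a sparse autoencoder loss. It assumes a hypothetical similarity graph `W` over dictionary elements and pairs plain $\ell_1$ sparsity with an $|a|^\top L |a|$ smoothness term; the function name and the specific weighting scheme are illustrative, not the paper's implementation.

```python
import torch

def laplacian_weighted_l1(a, W):
    """Hypothetical Laplacian-weighted l1 penalty (illustrative only).

    a : (batch, n) tensor of sparse codes from the autoencoder.
    W : (n, n) nonnegative similarity matrix over dictionary elements.
    """
    # Graph Laplacian L = D - W, where D is the diagonal degree matrix.
    L = torch.diag(W.sum(dim=1)) - W
    abs_a = a.abs()
    # Plain l1 sparsity on the codes.
    sparsity = abs_a.sum(dim=1)
    # Smoothness term |a|^T L |a|: small when neighboring dictionary
    # elements carry similar activation magnitudes, nudging neurons
    # toward a shared subset of basis functions.
    smoothness = torch.einsum("bi,ij,bj->b", abs_a, L, abs_a)
    return (sparsity + smoothness).mean()

# Illustrative use inside a training step (lam is a tuning weight):
# loss = ((x - x_hat) ** 2).mean() + lam * laplacian_weighted_l1(codes, W)
```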