When and where do feed-forward neural networks learn localist representations?

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Parallel distributed processing (PDP) models of neural networks (NNs) suggest that there can be no interpretable localist codes in a neural network, discouraging researchers from looking for them and implying that they are biologically implausible. However, recent results from psychology, neuroscience and deep-learning neural networks have shown that local codes do occasionally emerge from PDP models. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network (used as a model for a single layer of a deep network), using generated input and output data with known properties. We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented, and the sparsity of the input data. Using a one-hot output code drastically decreases the number of local codes in the hidden layer, suggesting that localist encoding will be found at the deeper layers of a deep neural network. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that localist encoding may offer resilience in noisy networks. These results suggest that localist coding can emerge from PDP networks, and thus might be relevant to the functioning of deep networks and the brain, and that psychological models based on local codes should not be dismissed out of hand. (A minimal sketch of the kind of experiment described here follows below.)
  • TL;DR: Local codes have been found in feed-forward neural networks
  • Keywords: localist, pdp, neural network, representation, psychology, cognition
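The sketch below is not the authors' code and their exact localist-code criterion is not given on this page; it only illustrates the kind of experiment the abstract describes, under stated assumptions: sparse generated binary input patterns, a single sigmoid hidden layer trained with plain SGD on one-hot targets, and one common "selective unit" criterion for a local code (every item of one class activates the unit more strongly than any item of any other class). All names, sizes, and parameters (e.g. `sparsity`, `n_hidden`) are illustrative.

```python
# Hypothetical sketch (not the authors' code): train a one-hidden-layer
# network on generated binary patterns and count "localist" hidden units,
# defined here as units whose activation for every item of one class
# exceeds their activation for every item of all other classes.
import numpy as np

rng = np.random.default_rng(0)

n_classes, items_per_class, n_inputs, n_hidden = 10, 20, 100, 50
sparsity = 0.1  # assumed fraction of active input bits

# Generated data: each class is a sparse binary prototype plus bit-flip noise.
prototypes = (rng.random((n_classes, n_inputs)) < sparsity).astype(float)
X = np.repeat(prototypes, items_per_class, axis=0)
flip = rng.random(X.shape) < 0.02
X = np.abs(X - flip)                      # noisy exemplars
y = np.repeat(np.arange(n_classes), items_per_class)
T = np.eye(n_classes)[y]                  # one-hot targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, sigmoid activations, softmax output, plain SGD.
W1 = rng.normal(0, 0.1, (n_inputs, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
lr = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1)
    logits = H @ W2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    dlogits = (P - T) / len(X)            # softmax cross-entropy gradient
    dW2 = H.T @ dlogits
    dH = dlogits @ W2.T
    dW1 = X.T @ (dH * H * (1 - H))        # sigmoid derivative
    W2 -= lr * dW2
    W1 -= lr * dW1

# Count localist units: the minimum activation for the preferred class
# must exceed the maximum activation for every other class.
H = sigmoid(X @ W1)
local_units = 0
for j in range(n_hidden):
    per_class = [H[y == c, j] for c in range(n_classes)]
    best = int(np.argmax([a.mean() for a in per_class]))
    others_max = max(a.max() for c, a in enumerate(per_class) if c != best)
    if per_class[best].min() > others_max:
        local_units += 1
print(f"localist hidden units: {local_units} / {n_hidden}")
```

To probe the effects reported in the abstract, one would sweep `n_hidden`, `sparsity`, and the number of training examples, swap the one-hot targets for a distributed output code, or zero out a random fraction of hidden activations during training to emulate dropout, counting localist units in each condition.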
