Abstract: Spatial navigation involves forming coherent representations of a map-like space while simultaneously tracking current location, in a primarily unsupervised manner. Despite a plethora of neurophysiological experiments revealing spatially tuned neurons across the mammalian neocortex and subcortical structures, it remains unclear how such representations are acquired in the absence of explicit allocentric targets. Drawing on the concept of predictive learning, we employ a biologically plausible learning rule that contrasts sensory-driven observations with internally driven expectations, learning to better predict sensory information. The local and online nature of this approach makes it well suited for deployment to neuromorphic hardware in edge applications. We implement this learning rule in a network with the feedforward and feedback pathways known to be necessary for spatial navigation. After training, we find that the receptive fields of the modeled units resemble experimental findings, with allocentric and egocentric representations emerging in the expected order along the processing streams. These findings illustrate how a local, self-supervised method for predicting sensory information can extract latent structure from the environment.
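The core idea of the learning rule, contrasting an internally driven expectation against a sensory-driven observation and updating locally from the mismatch, can be sketched in highly simplified form. The linear predictor, the toy environment, and all parameter names below are illustrative assumptions for exposition, not the paper's actual network or rule.

```python
import random

# Illustrative sketch (assumption): a single linear unit predicts the next
# sensory sample and updates its weights locally from the contrast between
# the internally driven expectation and the sensory-driven observation.
random.seed(0)

w, b = random.uniform(-0.1, 0.1), 0.0  # predictor parameters
eta = 0.05                             # learning rate

for _ in range(2000):
    x = random.uniform(-1.0, 1.0)      # current sensory sample
    target = 0.8 * x + 0.3             # toy environment: true next observation
    pred = w * x + b                   # internally driven expectation
    err = target - pred                # contrast: observation minus expectation
    w += eta * err * x                 # local update: presynaptic activity * error
    b += eta * err

print(w, b)  # parameters approach the environment's latent structure (0.8, 0.3)
```

Because each update uses only the unit's own input, output, and prediction error, the rule is local and online, which is what makes this style of learning attractive for neuromorphic hardware.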