Learning Invariance Manifolds of Visual Sensory Neurons

Published: 07 Nov 2022, Last Modified: 05 May 2023
NeurReps 2022 Poster
Keywords: neural invariances, invariance manifold, MEI, disentanglement, contrastive learning, visual cortex, CPPN
TL;DR: A data-driven method to identify and parameterise invariance manifolds in visual sensory neurons.
Abstract: Robust object recognition is thought to rely on neural mechanisms that are selective to complex stimulus features while being invariant to others (e.g., spatial location or orientation). To better understand biological vision, it is thus crucial to characterize which features neurons in different visual areas are selective or invariant to. In the past, invariances have commonly been identified by presenting carefully selected hypothesis-driven stimuli that rely on the intuition of the researcher. One example is the discovery of phase invariance in V1 complex cells. However, to identify novel invariances, a data-driven approach is more desirable. Here, we present a method that, combined with a predictive model of neural responses, learns a manifold in the stimulus space along which a target neuron's response is invariant. Our approach is fully data-driven, allowing the discovery of novel neural invariances, and enables scientists to generate and experiment with novel stimuli along the invariance manifold. We test our method on Gabor-based neuron models as well as on a neural network fitted to macaque V1 responses and show that it 1) successfully identifies neural invariances and 2) disentangles invariant directions in the stimulus space.
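To make the core idea concrete, the sketch below shows one way such an invariance manifold could be learned: a generator maps low-dimensional manifold coordinates to stimuli, and its parameters are optimized so that a differentiable response model stays near a target activation while the generated stimuli remain diverse. This is not the authors' implementation; `model_neuron`, the MLP generator (standing in for a CPPN), the pairwise-distance diversity term, and all names and weights are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): learn a generator g(z) whose
# outputs keep a differentiable model neuron's response near a target value
# (e.g., its MEI activation) while covering diverse stimuli.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps latent manifold coordinates z to stimulus images (MLP placeholder for a CPPN)."""

    def __init__(self, latent_dim=2, img_size=40):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, self.img_size, self.img_size)


def invariance_loss(responses, target_response, images, diversity_weight=0.1):
    # Keep responses close to the target activation along the whole manifold ...
    activation_term = ((responses - target_response) ** 2).mean()
    # ... while pushing generated stimuli apart so the manifold does not collapse to a point.
    flat = images.flatten(1)
    diversity_term = -torch.cdist(flat, flat).mean()
    return activation_term + diversity_weight * diversity_term


def learn_invariance_manifold(model_neuron, target_response, latent_dim=2,
                              steps=2000, batch_size=64, lr=1e-3):
    """model_neuron: differentiable stimulus -> response model for the target neuron (assumed given)."""
    g = Generator(latent_dim)
    opt = torch.optim.Adam(g.parameters(), lr=lr)
    for _ in range(steps):
        z = torch.rand(batch_size, latent_dim) * 2 - 1   # sample manifold coordinates in [-1, 1]^d
        images = g(z)
        responses = model_neuron(images)
        loss = invariance_loss(responses, target_response, images)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g  # g(z) parameterizes stimuli along the learned (approximate) invariance manifold
```

After training, sweeping z over a grid and rendering g(z) gives candidate stimuli along the invariance manifold that could be shown back to the neuron in a closed-loop experiment.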