TL;DR: Scientific question: How can we use computational models to decipher and/or represent neuronal tuning properties, and standardize descriptions for comparisons across stimuli/tasks/species?
Abstract: Scientific question: How can we use computational models to decipher and/or represent neuronal tuning properties, and standardize descriptions for comparisons across stimuli/tasks/species?

Introduction. At the onset of visual neuroscience, first there was light, and then came explanations. Working to stimulate neurons in primary visual cortex (V1) in 1958, Hubel and Wiesel projected white light onto cat retinas using a modified ophthalmoscope and a slide projector. They had glass and brass slides with drawings and cutouts, using them to shape light into simple geometric patterns. Among their many findings, they established that V1 neurons respond more strongly to specifically placed line segments: lines optimized in their location, length/width, color, and rotation. The simplicity of these stimuli allowed for straightforward interpretations, specifically that V1 neurons signal contour orientation [1].

Observations vs. interpretations. Five components made these experiments canonical, and they have been included in most subsequent studies of visual neuroscience:
1) A physical stimulus (e.g., light patterns on a projection screen/computer monitor).
2) A generative method for producing the physical stimuli (e.g., lines drawn manually on slides, variables for a computer graphics library, vectors in a generative adversarial network).
3) An experimenter-labeled stimulus space (e.g., orientation, categories) with a metric to order/cluster the physical stimuli (e.g., angular distance, perceptual similarity).
4) Neuronal activity associated with each stimulus (e.g., spike rates).
5) A potential mechanism suggesting how those tuning functions could arise from earlier inputs (e.g., spatially aligned projections from neurons in the thalamus [lateral geniculate nucleus]).
The first and fourth components are observables (we refer to these as pixels and spikes).
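Components 2-4 of this framework can be sketched numerically: a generative method (sampled orientations), a stimulus-space metric (circular angular distance, period 180 degrees for orientation), and neuronal activity (simulated spike rates from a peaked tuning curve). This is a minimal illustration, not any published model; the tuning-curve form and the parameter values (`pref`, `gain`, `kappa`) are hypothetical choices for the sketch.

```python
import math

def angular_distance(a, b):
    """Circular distance between two orientations in degrees (period 180)."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def tuning_curve(theta, pref=90.0, baseline=2.0, gain=20.0, kappa=2.0):
    """Hypothetical orientation tuning: firing rate peaks at `pref` degrees."""
    return baseline + gain * math.exp(
        kappa * (math.cos(math.radians(2 * (theta - pref))) - 1.0)
    )

# Component 2: a generative method producing physical stimuli (orientations).
stimuli = list(range(0, 180, 15))
# Component 4: the spike rate associated with each stimulus.
spikes = [tuning_curve(t) for t in stimuli]
```

Here `angular_distance` plays the role of the experimenter-chosen metric that orders stimuli within the labeled space (component 3); note that 10 and 170 degrees are only 20 degrees apart under this metric.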
The second and third components are fundamentally entangled with the experimenter's theories and interpretations (we refer to these as methods and spaces). The linchpin observation is that in this experimental design, the relationship between pixels and spikes is causal, but the relationship between spaces and spikes is correlational. Theoretically, there can be alternative explanations implicit in any given stimulus space that also affect neuronal activity; in an experiment, the subject's brain only has access to the physical stimuli, not to the meaning attached to them.
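The correlational nature of the spaces-to-spikes link can be illustrated with a toy confound. In this hypothetical experiment (all variables and values are invented for the sketch), spike rates are generated from orientation alone, but a second experimenter-labeled variable, "elongation", covaries with orientation by construction; a correlation analysis then cannot distinguish the two candidate stimulus spaces.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

orientation = [0, 30, 60, 90, 120, 150]          # candidate space A
elongation = [o / 150 for o in orientation]      # candidate space B, confounded with A
spikes = [10 - abs(o - 90) / 15 for o in orientation]  # rates driven by orientation only

# Both spaces show identical correlation with the spikes, so correlation
# alone cannot reveal which description the neuron's response reflects.
r_a = pearson(orientation, spikes)
r_b = pearson(elongation, spikes)
```

Because `elongation` is a positive linear transform of `orientation`, the two correlations are exactly equal, which is the point: equally good correlational fits from distinct interpretive spaces.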