Abstract: Artificial neural network models have emerged as promising mechanistic models of brain function, but there is little consensus on the correct method for comparing activation patterns in these models to brain responses. Drawing on recent work on mechanistic models in philosophy of neuroscience, we propose that a good comparison method should mimic the Inter-Animal Transform Class (IATC): the strictest set of functions needed to accurately map neural responses between subjects in a population for the same brain area. Using the IATC, we can map bidirectionally between model responses and brain data, assessing how well the model can masquerade as a typical subject using the same kinds of transforms needed to map across animal subjects. We attempt to empirically identify the IATC in three settings: a simulated population of neural network models, a population of mouse subjects, and a population of human subjects. In each setting, we find that the empirically identified IATC enables accurate neural predictions while also achieving high specificity (i.e., distinguishing response patterns from different areas while strongly aligning same-area responses between subjects). In some settings, we find evidence that the IATC is shaped by specific aspects of the neural mechanism, such as the non-linear activation function. Using IATC-guided transforms, we obtain new evidence, convergent with previous findings, in favor of topographic deep artificial neural networks (TDANNs) as models of the visual system.
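To make the abstract's core operation concrete, the sketch below illustrates one simple candidate transform class for mapping responses between two subjects: a ridge-regularized linear map fitted on shared stimuli and scored on held-out stimuli. This is a generic illustration, not the paper's method; the synthetic "subjects", the `alpha` value, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two subjects' responses to the same stimuli:
# rows are stimuli, columns are units/neurons.
n_stim, n_units = 200, 50
source = rng.normal(size=(n_stim, n_units))
mixing = rng.normal(size=(n_units, n_units))
target = source @ mixing + 0.1 * rng.normal(size=(n_stim, n_units))

def fit_linear_map(X, Y, alpha=1.0):
    """Fit a ridge-regularized linear transform W so that X @ W ≈ Y.

    A linear map is only one candidate transform class; the paper's point
    is that the class should be as strict as possible while still mapping
    accurately across subjects.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def mapped_correlation(X, Y, W):
    """Mean per-unit Pearson correlation between X @ W and Y."""
    pred = X @ W
    pc = pred - pred.mean(axis=0)
    yc = Y - Y.mean(axis=0)
    r = (pc * yc).sum(axis=0) / (
        np.linalg.norm(pc, axis=0) * np.linalg.norm(yc, axis=0)
    )
    return r.mean()

# Fit on half the stimuli, evaluate prediction accuracy on the held-out half.
train, test = slice(0, 100), slice(100, 200)
W = fit_linear_map(source[train], target[train])
score = mapped_correlation(source[test], target[test], W)
```

In the paper's framing, the same fitted class would also be asked to be *specific*: it should align same-area responses across subjects while failing to align responses from different areas.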
Submission Number: 83