Generalizable Representation Geometry for Grating Stimuli in Primary Visual Cortex and Artificial Neural Networks
Keywords: out-of-distribution generalization, predictive learning, mouse primary visual cortex
TL;DR: Out-of-distribution generalization error correlates with the dimensionality and curvature of the neural representational manifold.
Abstract: Humans and other animals display a remarkable ability to generalize learned knowledge
to novel domains (out-of-distribution, OOD). This capability is thought to depend on
the format of neural population representations, but it remains unclear which geometric
properties support OOD generalization and which learning objectives give rise to them.
We analyze mouse V1 population responses to static gratings of varying orientation and show that a
decoder trained within a restricted orientation domain can generalize to held-out domains.
The quality of generalization correlates with both the dimensionality and the curvature of the
underlying representational manifold. Notably, a similar OOD-generalizable geometry emerges
in a deep neural network (PredNet) trained for next-frame prediction on natural videos.
These results identify candidate geometric properties underpinning OOD generalization and
suggest predictive learning as a plausible route to acquiring generalizable representational
geometry.
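
A minimal sketch of the kind of analysis the abstract describes, assuming population responses stored as a trials-by-neurons NumPy array with orientation labels in degrees. The data here are synthetic cosine-tuned responses, and the specific choices (participation ratio as the dimensionality measure, the turning angle between successive steps as a curvature proxy, a least-squares readout of (cos 2θ, sin 2θ) trained on orientations below 90° and tested on the rest) are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded data: trials x neurons responses to static
# gratings, with cosine orientation tuning plus noise (hypothetical, not V1 data).
n_trials, n_neurons = 600, 80
theta = rng.choice(np.arange(0.0, 180.0, 10.0), size=n_trials)
pref = np.linspace(0.0, 180.0, n_neurons, endpoint=False)
X = np.cos(np.deg2rad(2.0 * (theta[:, None] - pref[None, :])))
X += 0.3 * rng.standard_normal(X.shape)

# Dimensionality: participation ratio of the covariance spectrum,
# PR = (sum lambda_i)^2 / (sum lambda_i^2).
lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
pr = lam.sum() ** 2 / (lam**2).sum()

# Curvature proxy: turning angle between successive steps of the
# trial-averaged trajectory, ordered by orientation.
uniq = np.unique(theta)
means = np.stack([X[theta == t].mean(axis=0) for t in uniq])
steps = np.diff(means, axis=0)
steps /= np.linalg.norm(steps, axis=1, keepdims=True)
cos_ang = np.clip((steps[:-1] * steps[1:]).sum(axis=1), -1.0, 1.0)
curvature = np.degrees(np.arccos(cos_ang)).mean()

# OOD decoding: fit a linear readout of (cos 2theta, sin 2theta) on the
# restricted domain (theta < 90 deg), then score angular error on the held-out domain.
train = theta < 90.0
Y = np.column_stack([np.cos(np.deg2rad(2.0 * theta)), np.sin(np.deg2rad(2.0 * theta))])
W, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)
pred = X[~train] @ W
theta_hat = (np.degrees(np.arctan2(pred[:, 1], pred[:, 0])) / 2.0) % 180.0
err = np.abs((theta_hat - theta[~train] + 90.0) % 180.0 - 90.0)  # wrapped to [0, 90)

print(f"participation ratio: {pr:.2f}")
print(f"mean turning angle (deg): {curvature:.1f}")
print(f"median OOD angular error (deg): {np.median(err):.1f}")

Because the synthetic responses are exactly cosine-tuned, the (cos 2θ, sin 2θ) readout is linear in the population activity, so the decoder transfers cleanly to the held-out domain; relating the PR and turning-angle statistics to that transfer error across datasets is the kind of correlation the abstract reports.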
Submission Number: 65