Keywords: Neural representations, topology, pretrained models, datasets
TL;DR: We introduce MAPS, a dataset of controlled 3D scene transformations, and show that pretrained vision models capture their underlying topological structure.
Abstract: Neural activity exhibits low-dimensional organization across brain areas, behaviors, and species. While prior work has shown that behaviors shape the geometry and topology of neural manifolds, the structure of sensory representations remains less understood. In this work, we use artificial neural networks to investigate the topology of neural representations of continuous changes in visual features. We introduce MAPS (Manifolds of Artificial Parametric Scenes), a dataset of objects rendered in 3D with systematic parameter sweeps across hue, camera angle, lighting, and size. Each parameter defines a specific topology (e.g., a ring or an interval), and combined parameters yield product manifolds. We show that, despite being trained on images without continuous transformations, pretrained vision models capture the topology of our controlled manifolds. We plan to expand MAPS with additional objects and transformations, and to move beyond topology toward analyzing the geometry of neural representations.
Submission Number: 120