Keywords: neuroai, image domain, natural image statistics, representational similarity, representation learning, computational neuroscience
TL;DR: Training deep neural networks across different image domains affects how predictive their learned representations are of neural activity in mice.
Abstract: Biological visual systems have evolved around the efficient coding of natural image statistics in order to support recognition of complex visual patterns. Recent work has shown that deep neural networks can learn representations similar to those measured in the visual areas of animals, suggesting they may serve as models of the brain. Varying the network architecture and loss function has been shown to modulate the biological similarity of the learned representations; however, the extent to which this similarity results from exposure to natural image statistics during training has not been fully characterized. Here, we use self-supervised learning to train neural network models across a range of data domains with different image statistics and evaluate the similarity of the learned representations to neural activity in the mouse visual cortex. We find that networks trained on different domains also exhibit different responses when shown held-out natural images. Furthermore, we find that the degree of biological similarity of the representations generally increases as a function of the naturalness of the data domain used for training. Our results provide evidence for the idea that the training data domain is an important component when modeling the visual system using deep neural networks.
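The abstract describes evaluating how similar learned network representations are to recorded neural activity. A standard way to do this is representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) over stimuli for each system, then correlate the two RDMs. The sketch below is a minimal, generic illustration of that technique with random placeholder data; the paper's actual similarity metric, stimuli, and recordings are not specified here.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Placeholder data: rows = stimuli (images), columns = units.
# In practice these would be network activations and recorded
# mouse visual cortex responses to the same held-out images.
model_features = rng.standard_normal((20, 64))
neural_responses = rng.standard_normal((20, 30))

def rdm(responses):
    """Condensed representational dissimilarity matrix:
    1 - Pearson correlation between stimulus response patterns."""
    return pdist(responses, metric="correlation")

# RSA score: rank correlation between the two systems' RDMs.
rho, _ = spearmanr(rdm(model_features), rdm(neural_responses))
print(f"RSA score: {rho:.3f}")
```

Because only the off-diagonal dissimilarities are compared, RSA is invariant to the differing dimensionality of the two systems (64 model units vs. 30 neurons here), which is what makes it suitable for comparing networks to neural recordings.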
Supplementary Material: zip