Keywords: neural encoding, computational neuroscience, video understanding, deep learning
Abstract: Extensive literature has drawn comparisons between recordings of biological neurons in the brain and deep neural networks. This comparative analysis aims to advance and interpret deep neural networks and to enhance our understanding of biological neural systems. However, previous work did not consider the temporal aspect, i.e., how the modelling of video and dynamics (e.g., motion) in deep networks relates to biological neural systems, within a large-scale comparison. Towards this end, we propose the first large-scale study focused on comparing video understanding models against visual cortex recordings obtained with video stimuli. The study encompasses around half a million regression fits, examining image vs. video understanding models, convolutional vs. transformer-based architectures, and fully supervised vs. self-supervised training. We show that video understanding models outperform image understanding models, that convolutional models outperform transformer-based ones in early-mid visual cortex regions, with the exception of multiscale transformers, and that two-stream models outperform single-stream ones. Furthermore, we propose a novel neural encoding scheme built on top of the best-performing video understanding models, while incorporating inter- and intra-region connectivity across the visual cortex. Our neural encoding leverages dynamics modelling from video stimuli, through the use of two-stream networks and multiscale transformers, while taking connectivity priors into consideration. Our results show that combining intra- and inter-region connectivity priors increases encoding performance over either prior alone or no connectivity priors. They also show the necessity of encoding dynamics to fully benefit from such connectivity priors.
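For concreteness, below is a minimal sketch of a single voxel-wise encoding fit of the kind aggregated in such studies. It assumes precomputed video-model features, ridge regression with cross-validated regularization, and a Pearson-correlation score per voxel; these specific choices, array shapes, and names are illustrative assumptions, not the paper's stated pipeline.

```python
# Sketch of one voxel-wise encoding fit (illustrative; not the paper's exact pipeline).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 1000, 512, 300  # hypothetical sizes

# Placeholder arrays standing in for real video-model features and cortex recordings.
X = rng.standard_normal((n_stimuli, n_features))  # features per video clip
Y = rng.standard_normal((n_stimuli, n_voxels))    # recorded response per voxel

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression with cross-validated regularization, fit jointly over all voxels.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13))
encoder.fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

# Score each voxel by the Pearson correlation between predicted and held-out responses.
scores = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(scores):.3f}")
```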
Submission Number: 17