Structured Visual Landscape: Generating Preferred Representations in Multi-modal Biological and Artificial Neural Networks
Keywords: visual representation, preferred images, fMRI, EEG, generative models
TL;DR: Develop a structured visual representation landscape constrained by activations to generate preferred representations in biological and artificial neural networks
Abstract: Understanding how neurons respond to visual stimuli is an important question in both deep learning and neuroscience, with significant implications for improving the interpretability of black-box artificial neural networks and for understanding visual representations in biological neural networks. We propose a structured visual representation landscape and design an activation-score-based prior that effectively regularizes the landscape using activations from either a brain region or units in an artificial neural network. Our model, Vis-Lens, integrates a variational auto-encoder and a diffusion model as the image generator, enabling the generation of natural, realistic preferred images by directly modifying the activation-regularized latents, which avoids tedious per-image optimization. We demonstrate the effectiveness of our framework on both artificial and biological neural networks, using multi-modal response data from human visual cortex, including functional Magnetic Resonance Imaging (fMRI) and electroencephalography (EEG). Our framework outperforms state-of-the-art methods at generating visual representations of these networks.
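The sketch below is a minimal, hypothetical illustration of the core idea described in the abstract: an activation-score prior that ties a generative latent space to recorded responses, so a preferred image for a target unit can be decoded by moving directly along an activation-regularized latent direction rather than running a per-image optimization. It is not the authors' Vis-Lens implementation; the toy VAE (the paper additionally uses a diffusion model), the linear readout, and all module sizes and names are assumptions for illustration only.

```python
# Hypothetical sketch of an activation-regularized latent space (not the paper's code).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Toy VAE over 64x64 RGB images; stands in for the paper's image generator."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z).view(-1, 3, 64, 64)

class ActivationReadout(nn.Module):
    """Linear map from latents to measured activations (ANN units or fMRI/EEG responses)."""
    def __init__(self, latent_dim=128, n_units=100):
        super().__init__()
        self.readout = nn.Linear(latent_dim, n_units)

    def forward(self, z):
        return self.readout(z)

def activation_prior_loss(vae, readout, images, activations, beta=1e-3):
    """VAE reconstruction + KL terms, plus an activation-score term that ties the
    latent landscape to recorded responses (the 'activation-regularized' part)."""
    mu, logvar = vae.encode(images)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    recon = vae.decode(z)
    rec_loss = nn.functional.mse_loss(recon, images)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    act_loss = nn.functional.mse_loss(readout(z), activations)
    return rec_loss + beta * kl + act_loss

@torch.no_grad()
def preferred_image(vae, readout, unit_idx, step=3.0):
    """Decode a 'preferred' image for one unit by pushing a latent along that unit's
    readout weight vector -- no per-image optimization loop."""
    direction = readout.readout.weight[unit_idx]
    z = step * direction / (direction.norm() + 1e-8)
    return vae.decode(z.unsqueeze(0))
```

Under these assumptions, training jointly on images and paired activations shapes the latent space so that each unit's readout weights define an interpretable direction, and `preferred_image` realizes the abstract's claim of generating preferred stimuli by directly editing the latents.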
Primary Area: applications to neuroscience & cognitive science
Submission Number: 15607