Functionally localized representations contain distributed information: insight from simulations of deep convolutional neural networks
Abstract: Preferential activation to faces in the brain’s fusiform gyrus has led to the proposal of a face-processing module termed the Fusiform Face Area (FFA) (Kanwisher et al., 1997). However, distributed, topographical object-form representations in the FFA and across visual cortex have been proposed to explain data showing that FFA activation patterns contain decodable information about non-face categories (Haxby et al., 2001; Hanson & Schmidt, 2011). Using two deep convolutional neural network models that achieve human-level object and face recognition, respectively, we demonstrate that both localized category representations (LCRs) and high-level face-specific representations support decoding accuracy between non-preferred visual categories comparable to that between a preferred and a non-preferred category. Our results suggest that neuroimaging of a cortical “module” optimized for face processing should yield significant decodable information about non-face categories so long as representations within the module are activated by non-face stimuli.
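To make the decoding analysis described above concrete, the sketch below shows one way to measure pairwise category decodability from a CNN's high-level representation: extract penultimate-layer activations and fit a linear classifier, analogous to multivariate pattern analysis of FFA voxel patterns. This is a minimal illustration under stated assumptions, not the authors' pipeline; the choice of torchvision VGG16, the synthetic image tensors, and the two placeholder category labels are assumptions introduced here for demonstration only.

```python
# Illustrative sketch (not the authors' code): decode two non-preferred object
# categories from the penultimate-layer activations of a pretrained CNN using a
# linear classifier, mirroring pairwise MVPA decoding of fMRI activation patterns.
# Assumes torch, torchvision, and scikit-learn are installed.

import torch
import torchvision.models as models
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score


def penultimate_activations(net: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Return activations of the layer just before the class readout (VGG-style nets)."""
    net.eval()
    with torch.no_grad():
        x = net.features(images)
        x = net.avgpool(x)
        x = torch.flatten(x, 1)
        # Drop the final classification layer so we decode from the representation itself.
        for layer in list(net.classifier.children())[:-1]:
            x = layer(x)
    return x


# Object-recognition model standing in for ventral-stream object representations.
object_net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Placeholder stimuli and labels; replace with real images from two non-face categories.
images = torch.rand(40, 3, 224, 224)
labels = [0] * 20 + [1] * 20  # e.g., chairs vs. shoes

features = penultimate_activations(object_net, images).numpy()

# Cross-validated linear decoding accuracy between the two non-preferred categories.
decoder = LinearSVC(max_iter=5000)
accuracy = cross_val_score(decoder, features, labels, cv=5).mean()
print(f"pairwise decoding accuracy: {accuracy:.2f}")
```

Comparing this accuracy against the accuracy for a preferred-versus-non-preferred pair (e.g., faces vs. chairs, using a face-trained network) is one way to probe the claim that face-specific representations still carry decodable non-face information.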