Keywords: Adversarial robustness, vision, infant head-cam, temporal continuity
Abstract: Task-optimized convolutional neural networks are the most quantitatively accurate models of the primate visual system. Unlike humans, however, these models can easily be fooled by modifying their inputs with human-imperceptible image perturbations, resulting in poor adversarial robustness. Prior work showed that modifying a model's training objective or its architecture can improve its adversarial robustness. Another ingredient in building computational models of sensory cortex is the training dataset, and, to our knowledge, its effect on a model's adversarial robustness has not been investigated. Motivated by observations that newborn chicks (Gallus gallus) develop more invariant visual representations when reared with more temporally continuous visual experience, here we evaluate a model's adversarial robustness when it is trained on a more naturalistic dataset---a longitudinal video dataset collected from the perspective of infants (SAYCam; Sullivan et al., 2020). By evaluating the adversarial robustness of models on $26$-way classification of a set of annotated video frames from this dataset, we find that, across multiple objective functions, models pre-trained on SAYCam video frames are more adversarially robust than those pre-trained on ImageNet. Our results suggest that to build models that are more adversarially robust, additional effort should be made in curating datasets that better resemble the natural image sequences and visual experience that infants receive.
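The abstract's notion of adversarial robustness — accuracy under small, human-imperceptible input perturbations — can be illustrated with a minimal sketch. The paper's models and the SAYCam data are not reproduced here; instead, assume a toy linear $26$-way classifier (mirroring the paper's $26$-way frame classification) attacked with a single FGSM step, where the weight matrix `W`, the synthetic data, and the budget `eps` are all illustrative assumptions:

```python
import numpy as np

# Illustrative sketch only: toy linear 26-way classifier + one-step FGSM
# attack. W, X, y, and eps are synthetic assumptions, not from the paper.
rng = np.random.default_rng(0)

def accuracy_under_fgsm(W, X, y, eps):
    """Accuracy after an L-infinity FGSM perturbation of size eps."""
    logits = X @ W
    # Softmax cross-entropy gradient w.r.t. the input of a linear model:
    # (softmax(logits) - onehot(y)) @ W.T
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(W.shape[1])[y]
    grad = (probs - onehot) @ W.T
    X_adv = X + eps * np.sign(grad)      # one FGSM ascent step on the loss
    return float((np.argmax(X_adv @ W, axis=1) == y).mean())

# Synthetic, roughly separable data for a 26-way problem.
n, d, k = 200, 64, 26
W = rng.normal(size=(d, k))
y = rng.integers(0, k, size=n)
X = np.eye(k)[y] @ W.T * 0.5 + rng.normal(scale=0.1, size=(n, d))

clean = accuracy_under_fgsm(W, X, y, eps=0.0)   # unperturbed accuracy
robust = accuracy_under_fgsm(W, X, y, eps=0.1)  # accuracy under attack
```

In the paper's setting, `robust` would be measured for networks pre-trained on SAYCam versus ImageNet, with the finding that SAYCam pre-training degrades less under attack.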