Human perception in computer vision

Ron Dekel

Nov 04, 2016 (modified: Jan 17, 2017) · ICLR 2017 conference submission
  • Abstract: Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
  • TL;DR: Correlates for several properties of human perception emerge in convolutional neural networks following image categorization learning.
  • Conflicts: weizmann.ac.il
  • Keywords: Computer vision, Transfer Learning
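
The abstract's claim that DNN computation can be used to estimate perceptual loss can be illustrated with a minimal sketch: compare two images by the distance between their mid-layer activations in an ImageNet-trained network. This is not the paper's code; the model choice (VGG-16), layer index, and preprocessing below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): estimate a "perceptual loss" between two
# images as the distance between mid-layer activations of an object-recognition DNN.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained object-recognition network; we only read out intermediate features.
vgg = models.vgg16(pretrained=True).features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(img, layer=16):
    """Return activations after the first `layer` modules (a mid-computation stage)."""
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        for i, module in enumerate(vgg):
            x = module(x)
            if i == layer:
                break
    return x

def perceptual_distance(img_a, img_b, layer=16):
    """L2 distance between mid-computation DNN activations of two images."""
    return torch.norm(features(img_a, layer) - features(img_b, layer)).item()

# Usage (hypothetical file names): larger distances should roughly track how
# perceptually noticeable the image change is.
# dist = perceptual_distance(Image.open("original.png").convert("RGB"),
#                            Image.open("distorted.png").convert("RGB"))
```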
