Evaluating Peripheral Vision as an Input Transformation to Understand Object Detection Model Behavior

Published: 27 Oct 2023, Last Modified: 20 Nov 2023
Venue: Gaze Meets ML 2023 Poster
Submission Type: Extended Abstract
Keywords: peripheral vision, object detection, image transformation, dataset
TL;DR: We simulate peripheral vision in deep neural networks to understand object detection behavior.
Abstract: Incorporating aspects of human gaze into deep neural networks (DNNs) has been used to both improve and understand the representational properties of models. We extend this work by simulating peripheral vision -- a key component of human gaze -- in object detection DNNs. To do so, we modify a well-tested model of human peripheral vision (the Texture Tiling Model, TTM) to transform a subset of the MS-COCO dataset to mimic the information loss from peripheral vision. This transformed dataset enables us to (1) evaluate the performance of a variety of pre-trained DNNs on object detection in the periphery, (2) train a Faster-RCNN with peripheral vision input, and (3) test trained DNNs for corruption robustness. Our results show that simulating peripheral vision helps us understand how different DNNs perform under constrained viewing conditions. In addition, we show that one benefit of training with peripheral vision is increased robustness to geometric and high-severity image corruptions, but decreased robustness to noise-like corruptions. Altogether, our work makes it easier to model human peripheral vision in DNNs to understand both the role of peripheral vision in guiding gaze behavior and the benefits of human gaze in machine learning. Data and code will be released at https://github.com/RosenholtzLab/coco-periph-gaze
Submission Number: 10
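To make step (1) of the abstract concrete, here is a minimal sketch of evaluating a standard pre-trained detector on TTM-transformed (peripheral-vision) COCO images. It uses torchvision's off-the-shelf Faster R-CNN; the directory path, file layout, and confidence threshold are assumptions for illustration only, and the released code at the repository above defines the authors' actual pipeline.

```python
# Hypothetical sketch: run a pre-trained Faster R-CNN over a directory of
# MS-COCO images that have already been transformed by the Texture Tiling
# Model. The "coco-periph/transformed" path is an assumption, not the
# released dataset layout.
from pathlib import Path

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Standard pre-trained Faster R-CNN from the torchvision model zoo
# (requires torchvision >= 0.13 for the weights= API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image_dir = Path("coco-periph/transformed")  # hypothetical location

with torch.no_grad():
    for image_path in sorted(image_dir.glob("*.jpg")):
        image = to_tensor(Image.open(image_path).convert("RGB"))
        # In eval mode the model returns one dict per input image,
        # with "boxes", "labels", and "scores" tensors.
        (prediction,) = model([image])
        keep = prediction["scores"] > 0.5  # arbitrary confidence cutoff
        print(image_path.name, prediction["labels"][keep].tolist())
```

Comparing these detections against the same model's output on the untransformed images would give a rough picture of how much the simulated peripheral information loss degrades detection.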