Saliency Driven Gaze Control for Autonomous Pedestrians

Anonymous

19 Dec 2022 (modified: 05 May 2023) · Submitted to GI 2023
Keywords: Gaze Control, Saliency, Virtual Agent Behaviour
TL;DR: We present two saliency-driven gaze models: a particle-gradient model and a saccade model.
Abstract: How and why an agent looks at its environment can inform its navigation, behaviour, and interaction with that environment. A human agent's visual-motor system is complex: it requires both an understanding of visual stimuli and adaptive methods to control and aim the gaze in accordance with goal-driven behaviour or intent. Drawing on observations and techniques from psychology, computer vision, and human physiology, we present techniques to procedurally generate several types of gaze movement (head movements, saccades, microsaccades, and smooth pursuits) driven entirely by visual input in the form of saliency maps, which represent pre-attentive processing of visual stimuli, in order to replicate human gaze behaviour. Each method is designed to be agnostic to attention and cognitive processing, to capture the nuances of its type of gaze movement, and to support both intentional and passive behaviours. In combination with parametric saliency map generation, these methods serve as a foundation for modelling completely visually driven, procedural gaze in simulated human agents.
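
To make the idea of saliency-driven gaze concrete, here is a minimal sketch of saccade target selection from a saliency map. It assumes the map is a 2D array of pre-attentive activation values and uses winner-take-all selection with inhibition of return, a common baseline in saliency modelling; the function and parameter names (`next_saccade_target`, `radius`) are illustrative and not taken from the paper, whose particle-gradient and saccade models may differ.

```python
import numpy as np

def next_saccade_target(saliency: np.ndarray,
                        inhibition: np.ndarray,
                        radius: int = 8) -> tuple[int, int]:
    """Pick the most salient uninhibited pixel (winner-take-all), then
    suppress a disc around it (inhibition of return)."""
    masked = saliency * (1.0 - inhibition)
    y, x = np.unravel_index(np.argmax(masked), masked.shape)
    # Inhibition of return: damp the chosen fixation's neighbourhood so
    # subsequent saccades explore other salient regions.
    yy, xx = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
    inhibition[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 1.0
    return int(y), int(x)

# Usage: draw a sequence of fixations from a (hypothetical) saliency map.
rng = np.random.default_rng(0)
saliency_map = rng.random((64, 64))
ior = np.zeros_like(saliency_map)
fixations = [next_saccade_target(saliency_map, ior) for _ in range(5)]
```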
Track: Graphics
Accompanying Video: zip