Keywords: Crowd Simulation, Gaze Behaviour, Saliency
TL;DR: We present two real-time customizable gaze models for animating gaze in virtual crowds based on pseudo-saliency maps.
Abstract: How and why an agent looks at its environment can inform its navigation, behaviour and interaction with that environment. The human visual-motor system is complex: it requires both an understanding of visual stimuli and adaptive methods to control and aim gaze in accordance with goal-driven behaviour or intent. Drawing on observations and techniques from psychology, computer vision and human physiology, we present techniques that procedurally generate several types of gaze movement (head movements, saccades, microsaccades, and smooth pursuits), driven entirely by visual input in the form of saliency maps that represent pre-attentive processing of visual stimuli, in order to replicate human gaze behaviour. Each method is designed to be agnostic to attention and cognitive processing, to cover the nuances of each type of gaze movement, and to support desired intentional or passive behaviours. In combination with parametric saliency map generation, these methods serve as a foundation for modelling completely visually driven, procedural gaze in simulated human agents.
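To make the core idea concrete, the sketch below shows one common way a saliency map can drive fixation selection: a winner-take-all pick of the most salient location, followed by inhibition of return so successive fixations explore new regions. This is a minimal illustration of the general saliency-driven principle, not the paper's gaze models (which also cover saccade dynamics, microsaccades, smooth pursuits, and head movements); the function names `pick_fixation` and `inhibit` are hypothetical.

```python
# Illustrative sketch: winner-take-all fixation selection over a saliency
# map with Gaussian inhibition of return. Not the paper's method.
import numpy as np

def pick_fixation(saliency: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the most salient location."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def inhibit(saliency: np.ndarray, fix: tuple[int, int],
            radius: float = 20.0) -> np.ndarray:
    """Suppress saliency around the current fixation (inhibition of return)."""
    rows, cols = np.indices(saliency.shape)
    dist2 = (rows - fix[0]) ** 2 + (cols - fix[1]) ** 2
    return saliency * (1.0 - np.exp(-dist2 / (2.0 * radius ** 2)))

# Usage: generate a short scanpath from a random pseudo-saliency map.
rng = np.random.default_rng(0)
smap = rng.random((120, 160))
scanpath = []
for _ in range(5):
    fix = pick_fixation(smap)
    scanpath.append(fix)
    smap = inhibit(smap, fix)
print(scanpath)
```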
Track: Graphics
Accompanying Video: zip
Revision: Yes
Revision Reviewers: The same reviewers
Revision Letter: pdf
Summary Of Changes: Based on the metareview, we addressed the concerns outlined by the reviewers as follows:
Based on the comments of Reviewer BRy1:
- added a brief explanation of why our model is preferable to using pyStar-FC alone at the end of Section 5, paragraph 3
- added a phrase highlighting the need for further comparison with other models to the last paragraph of Section 6
- corrected grammatical errors and adjusted the sentence on page 7
Based on the comments of Reviewer PCWz:
- added a discussion of parameter tuning in Section 5, paragraph 4
- added a brief justification for why our model may be preferable to simpler approaches in Section 4.2.1, paragraph 7
Based on the comments of Reviewer VhB1:
- added a clarification to Section 5, paragraph 2 explaining how pyStar-FC internally computes a saliency map before generating scanpaths
- added an explanation of how we manually tuned our model to match pyStar-FC in Section 5, paragraph 4
- added a discussion of possible optimization frameworks and how they could overcome shortcomings in our model to Section 5, paragraph 4, and Section 6, paragraph 1
- added a brief explanation of the terms in Equation 1 to Section 4.1, paragraph 2