Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes

Published: 03 Nov 2020, Last Modified: 05 May 2023 · SVRHM@NeurIPS Oral · Readers: Everyone
Keywords: visual search, eye movements, Bayesian modeling, saliency model
TL;DR: A unified Bayesian model for visual search that uses saliency maps as prior information, adapted to and validated on natural images.
Abstract: Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images, but they provide no information about the temporal sequence of fixations. Nowadays, one of the biggest challenges in the field is to go beyond saliency maps and predict the sequence of fixations involved in a visual task, such as searching for a given target. Bayesian observer models have been proposed for this task, as they represent visual search as an active sampling process. Nevertheless, they were mostly evaluated on artificial images, and how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search that uses saliency maps as prior information. We validated our model with a visual search experiment on natural scenes in which eye movements were recorded. We show that, although state-of-the-art saliency maps model bottom-up first impressions well in a visual search task, their performance degrades to chance after the first few fixations, when top-down task information becomes critical. We therefore propose to use them as priors for Bayesian searchers. This approach yields behavior very similar to that of humans over the whole scanpath, both in the percentage of targets found as a function of fixation rank and in scanpath similarity, reproducing the entire sequence of eye movements.
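The core idea of "saliency map as prior for a Bayesian searcher" can be illustrated with a small sketch: the normalized saliency map serves as the prior over target locations, each fixation yields noisy, eccentricity-dependent evidence that updates the posterior, and the next fixation goes to the posterior maximum. This is a minimal, hypothetical illustration under assumed simplifications, not the authors' implementation; the Gaussian visibility map, the noise model, and parameters such as `visibility_sigma`, `noise_sigma`, and `max_fixations` are illustrative assumptions.

```python
import numpy as np

def bayesian_search(saliency_map, target_location, visibility_sigma=3.0,
                    noise_sigma=1.0, max_fixations=10, rng=None):
    """Toy Bayesian searcher using a saliency map as the prior over the target.

    At each fixation every location yields a noisy response whose reliability
    decays with eccentricity (Gaussian visibility map); the posterior over the
    target location is updated and the next fixation goes to its maximum.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = saliency_map.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Prior: the saliency map, normalized to a probability distribution.
    log_post = np.log(saliency_map / saliency_map.sum() + 1e-12)

    fixation = np.unravel_index(np.argmax(log_post), (h, w))
    scanpath = [tuple(int(v) for v in fixation)]

    for _ in range(max_fixations):
        if scanpath[-1] == tuple(target_location):
            break  # target fixated: search ends
        fy, fx = scanpath[-1]
        # Visibility: evidence quality falls off with distance from fixation.
        visibility = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2)
                            / (2 * visibility_sigma ** 2))
        # Noisy observation: signal only at the true target location.
        signal = np.zeros((h, w))
        signal[tuple(target_location)] = 1.0
        obs = visibility * signal + rng.normal(0.0, noise_sigma, (h, w))
        # Log-likelihood of the observation if the target were at each
        # candidate location (terms shared by all candidates are dropped).
        log_like = (2.0 * obs * visibility - visibility ** 2) / (2 * noise_sigma ** 2)
        log_post = log_post + log_like
        log_post -= log_post.max()  # numerical stability
        # Next fixation: maximum a posteriori location.
        fixation = np.unravel_index(np.argmax(log_post), (h, w))
        scanpath.append(tuple(int(v) for v in fixation))
    return scanpath

# Example usage with a random "saliency map" (placeholder for a real model's output):
rng = np.random.default_rng(0)
saliency = rng.random((32, 32))
print(bayesian_search(saliency, target_location=(10, 20), rng=rng))
```

Any non-negative saliency map can be passed as `saliency_map`; replacing the maximum-a-posteriori fixation rule with an expected-information-gain rule would give an ideal-searcher variant of the same scheme.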