Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models

Published: 20 Oct 2022, Last Modified: 22 Oct 2023. Gaze Meets ML 2022 Poster.
Keywords: Gaze, Eye-Tracking, Deep Learning, Attentional Decoding
TL;DR: We provide a benchmark dataset and end-to-end models for decoding the locus of a person's attention from their gaze data.
Abstract: Eye-tracking has the potential to provide rich behavioral data about human cognition in ecologically valid environments. However, analyzing this rich data is often challenging. Most automated analyses are specific to simplistic artificial visual stimuli with well-separated, static regions of interest, while most analyses of complex visual stimuli, such as natural scenes, rely on laborious and time-consuming manual annotation. This paper studies the use of computer vision tools for ``attention decoding'', the task of assessing the locus of a participant's overt visual attention over time. We provide a publicly available Multiple Object Eye-Tracking (MOET) dataset for training and evaluating attention decoding algorithms, consisting of gaze data from participants tracking specific objects, annotated with labels and bounding boxes, in crowded real-world videos. We also propose two end-to-end deep learning models for attention decoding and compare them to state-of-the-art heuristic methods.
Submission Type: Full Paper
Travel Award - Academic Status: Undergraduate
Travel Award - Institution And Country: Indian Institute of Technology, Kharagpur, India
Travel Award - Low To Lower-middle Income Countries: Yes, my institution qualifies.
Camera Ready Latexfile: zip
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2211.10966/code)