Advancements in Attention Decoding Using the MOET Dataset: A Comparative Study

NeurIPS 2023 Workshop Gaze Meets ML, Submission 20

08 Oct 2023 (modified: 27 Oct 2023), submitted to Gaze Meets ML 2023
Keywords: Gaze, Eye-Tracking, Deep Learning, Attentional Decoding, Visual Attention
TL;DR: We improve upon existing attention decoding methods and provide the first baseline for classifying attention loci.
Abstract: Eye movements offer a valuable window into the workings of the human mind and brain. The analysis of eye movements and gaze fixations has yielded profound insights across domains including cognitive science, marketing, human-computer interaction, and human-robot interaction, providing a rich source of knowledge on diverse cognitive functions. A central challenge in eye-tracking data analysis is inferring a person's visual attention at each moment from their measured gaze behaviour, a task known as "attention decoding". Most eye-tracking analyses rely on labour-intensive manual coding of attentional states, a slow and error-prone endeavour. Recent advances in machine learning offer the potential to automate this process, but progress has been hindered by the lack of publicly available labeled data for benchmarking. The recently released Multiple Object Eye-Tracking (MOET) dataset overcomes this obstacle, providing eye-tracking data from human participants observing dynamic visual scenes. We improve upon the existing end-to-end architecture and present several competitive algorithms for the task of attention decoding on the MOET dataset. We also present baseline results for the distinct but related task of labeling attention loci.
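The abstract frames attention decoding as deciding, at each moment, which of several moving objects a viewer is attending to, given their gaze position. As a purely illustrative sketch of that framing (not the end-to-end architecture studied in the paper, whose details are not given here), a naive nearest-object baseline assigns each gaze sample to the closest tracked object; the function name and data layout below are hypothetical:

```python
import math

def decode_attention(gaze, object_tracks):
    """Nearest-object baseline for attention decoding: at each frame,
    label the gaze sample with the index of the closest tracked object.

    gaze: list of (x, y) gaze coordinates, one per frame.
    object_tracks: list of per-frame lists of (x, y) object centres.
    Returns one attention-locus label (object index) per frame.
    """
    labels = []
    for (gx, gy), objects in zip(gaze, object_tracks):
        # Euclidean distance from the gaze point to each object centre.
        dists = [math.hypot(gx - ox, gy - oy) for ox, oy in objects]
        labels.append(dists.index(min(dists)))
    return labels

# Toy example: 3 frames, 2 tracked objects.
gaze = [(0.0, 0.0), (5.0, 5.0), (9.0, 0.0)]
tracks = [
    [(0.0, 1.0), (10.0, 0.0)],
    [(0.0, 1.0), (6.0, 5.0)],
    [(0.0, 1.0), (10.0, 0.0)],
]
print(decode_attention(gaze, tracks))  # [0, 1, 1]
```

Learned approaches such as the ones benchmarked on MOET replace this geometric rule with models that can account for fixation dynamics, smooth pursuit, and measurement noise.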
Submission Type: Extended Abstract
Submission Number: 20