Abstract: Eye tracking has been a research tool for decades, providing insights into interactions, usability, and, more recently, gaze-enabled interfaces. Recent work has utilized consumer-grade and webcam-based eye tracking, but is limited by the need to repeatedly calibrate the tracker, which becomes cumbersome for use outside the lab. To address this limitation, we developed an unsupervised algorithm that maps gaze vectors from a webcam to fixation features used for user modeling, bypassing the need for screen-based gaze coordinates, which require a calibration process. We evaluated our approach using three datasets (N=377) encompassing different UIs (computerized reading, an Intelligent Tutoring System) and environments (laboratory or classroom), with a traditional gaze tracker used for comparison. Our research shows that webcam-based gaze features correlate moderately with eye-tracker-based features and can model user engagement and comprehension as accurately as the latter. We discuss applications for research and for gaze-enabled user interfaces intended for long-term use in the wild.