Synergistic Alignment-Based Domain Adaptation For Gaze Estimation

Published: 01 Jan 2024 · Last Modified: 03 Jul 2025 · CCBR (1) 2024 · CC BY-SA 4.0
Abstract: Most appearance-based gaze estimation methods predict gaze direction within a single-dataset setting and tend to suffer performance degradation when crossing domains. Existing domain adaptation solutions either produce noisy pseudo-labels or require extra computational resources for domain alignment. In this paper, we propose a synergistic alignment-based method for gaze estimation that enhances domain alignment at both the category level and the feature level. First, we employ a hybrid network consisting of local convolutions and self-attention layers to generate refined pseudo-labels, capturing both local and global gaze-related features. Second, leveraging Kullback-Leibler (KL) divergence, we derive a united loss that reduces the mismatch of category and feature distributions in an effective and efficient way. Experimental results on the ETH-XGaze, Gaze360, EyeDiap, and MPIIGaze datasets demonstrate that our proposed method achieves significant improvements in gaze estimation over state-of-the-art domain adaptation methods.
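To make the united-loss idea concrete, the sketch below combines a category-level and a feature-level KL divergence term into one scalar loss. This is a minimal illustration under assumed names (`kl_divergence`, `united_alignment_loss`) and an assumed weighting scheme (`alpha`); the abstract does not specify the exact formulation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as probability lists.
    eps guards against log(0); assumes p and q have equal length."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def united_alignment_loss(src_cat, tgt_cat, src_feat, tgt_feat, alpha=0.5):
    """Hypothetical united loss: a weighted sum of category-level and
    feature-level KL alignment terms (the weighting is an assumption,
    not the paper's stated formula)."""
    cat_term = kl_divergence(src_cat, tgt_cat)    # category-distribution mismatch
    feat_term = kl_divergence(src_feat, tgt_feat)  # feature-distribution mismatch
    return alpha * cat_term + (1 - alpha) * feat_term

# Identical source/target distributions yield zero alignment loss.
p = [0.25, 0.25, 0.25, 0.25]
print(united_alignment_loss(p, p, p, p))  # → 0.0
```

In practice both terms would be computed over softmax outputs and pooled feature statistics of source and target batches; minimizing the combined loss pulls the two domains' distributions together.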