TTAGaze: Self-Supervised Test-Time Adaptation for Personalized Gaze Estimation

Published: 01 Jan 2024, Last Modified: 15 May 2025. IEEE Trans. Circuits Syst. Video Technol. 2024. License: CC BY-SA 4.0
Abstract: In this paper, we address the problem of personalized gaze estimation. Owing to anatomical differences between individuals, current personalized gaze models often rely on fine-tuning or fully supervised methods with labeled calibration samples, which may be impractical in real-world applications. To overcome this limitation, we propose Self-Supervised Test-Time Adaptation for Personalized Gaze Estimation (TTAGaze), which adapts the model at test time using only a small amount of unlabeled data. Our goal is to obtain a gaze estimation model adapted to a target person from just a few unlabeled images. We call this setting unsupervised few-shot personalized adaptation in gaze estimation; it is better aligned with real-world scenarios than existing approaches. Our approach combines self-supervised learning and meta-learning. The model consists of a main task (gaze estimation) and a self-supervised auxiliary task, and the two tasks are trained jointly in a coupled manner. At test time, adaptation to an unseen person is achieved by optimizing the self-supervised loss on a few unlabeled images. The model parameters are learned via model-agnostic meta-learning (MAML) to facilitate effective unsupervised few-shot personalized adaptation. Experimental results demonstrate that the proposed method outperforms alternative approaches on several widely used benchmark datasets.