Weakly-supervised & Uncertainty-aware 3D Gaze Estimation with Geometry-guided Constraints

28 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: 3D geometry, gaze representation learning, 3D gaze estimation
Abstract: 3D eye gaze estimation from monocular images remains a challenging task due to model sensitivity to illumination, occlusion, and head-pose changes. With the growing interest in and demand for in-the-wild 3D gaze estimation under unconstrained environments, generalization ability has become a crucial performance metric for 3D gaze estimation models. In this work, we present UGaze-Geo, an uncertainty-aware, weakly-supervised framework for 3D gaze estimation. We leverage general knowledge of human eyeball anatomy to develop multiple geometric constraints. The proposed constraints are of two types: the first is formulated by constructing a mapping from anatomical 3D eyeball parameters to eye appearance features (eyelid and iris landmarks); the second is based on the relationship among head rotation, eyeball rotation, and gaze, where we learn a variable describing the relative eyeball rotation conditioned on the current head pose. Both types of constraints are free of gaze labels and generalize to arbitrary subjects and environmental conditions. We formulate these constraints as loss functions in a probabilistic framework. We evaluate UGaze-Geo on within-domain and four cross-domain gaze estimation tasks to validate the effectiveness of each constraint and the advantage of performing probabilistic gaze estimation. Experimental results indicate that our model achieves state-of-the-art performance on multiple datasets.
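The submission page includes no code, but the abstract's two constraints lend themselves to a compact illustration. Below is a minimal PyTorch-style sketch of (a) a label-free, uncertainty-aware reprojection loss that maps anatomical eyeball parameters (center, radius, gaze direction) to an image-plane iris location and compares it against detected iris landmarks, and (b) gaze composed from head rotation and a relative eyeball rotation. All function names, the pinhole intrinsics, tensor shapes, and the choice of a Gaussian negative log-likelihood are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def project_eyeball_to_iris(center, radius, gaze, fx, fy, cx, cy):
    """Project the 3D iris center (eyeball center + radius * gaze
    direction) into the image with a pinhole camera model.
    center/gaze: (B, 3); radius: (B, 1); intrinsics are scalars.
    Assumes the camera looks down +z with positive depth.
    """
    iris_3d = center + radius * F.normalize(gaze, dim=-1)
    u = fx * iris_3d[:, 0] / iris_3d[:, 2] + cx
    v = fy * iris_3d[:, 1] / iris_3d[:, 2] + cy
    return torch.stack([u, v], dim=-1)  # (B, 2)

def geometric_nll_loss(pred_center, pred_radius, pred_gaze,
                       pred_log_var, iris_landmarks_2d,
                       fx=500.0, fy=500.0, cx=112.0, cy=112.0):
    """Label-free, uncertainty-aware constraint: a Gaussian negative
    log-likelihood over the residual between the projected iris center
    and the centroid of detected 2D iris landmarks. No gaze labels
    are used; supervision comes entirely from appearance features.
    """
    proj = project_eyeball_to_iris(pred_center, pred_radius,
                                   pred_gaze, fx, fy, cx, cy)
    target = iris_landmarks_2d.mean(dim=1)  # (B, N, 2) -> (B, 2)
    sq_err = (proj - target).pow(2).sum(dim=-1, keepdim=True)
    # Heteroscedastic NLL: a large predicted variance down-weights the
    # residual but is penalized by the log-variance term.
    return (0.5 * torch.exp(-pred_log_var) * sq_err
            + 0.5 * pred_log_var).mean()

def gaze_from_head_and_relative(R_head, R_rel):
    """Compose gaze as the head rotation applied to a relative eyeball
    rotation acting on the canonical forward axis (0, 0, -1).
    R_head, R_rel: (B, 3, 3) rotation matrices.
    """
    forward = torch.tensor([0.0, 0.0, -1.0],
                           device=R_head.device).expand(R_head.shape[0], 3)
    return torch.einsum('bij,bjk,bk->bi', R_head, R_rel, forward)
```

Under these assumptions, neither term requires gaze annotations: the first is supervised by detected landmarks and the second only ties the predicted gaze to the head pose through a learned relative rotation. The learned log-variance lets the model down-weight samples where landmark detection is unreliable, which is one plausible reading of the "uncertainty-aware" estimation the abstract describes.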
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13611