Abstract: In this paper, we present a method for robust head pose estimation via carefully designed loss functions. We argue that exploiting the relationship between the predicted yaw, pitch, and roll values and the features of a head pose estimation network is crucial for making robust predictions. To this end, we formulate novel loss functions that ensure the robustness and generalization of the network's predictions. We report results on public datasets, namely AFLW2000-3D and BIWI, demonstrating that the proposed method outperforms state-of-the-art 2D head pose estimation algorithms by a margin of up to 10%. We will release the source code at https://github.com/soni-H/lercpose/ upon acceptance of the paper.
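The abstract does not specify the loss formulations, but results on AFLW2000-3D and BIWI are conventionally reported as mean absolute error (MAE) per Euler angle in degrees. Below is a minimal sketch of that standard metric, with wrap-around handling so that, e.g., 350° and 10° differ by 20° rather than 340°; the function names and array layout are assumptions for illustration, not part of the paper.

```python
import numpy as np

def angular_error(pred_deg, gt_deg):
    # Smallest absolute difference on the circle, in degrees.
    diff = np.abs(pred_deg - gt_deg) % 360.0
    return np.minimum(diff, 360.0 - diff)

def pose_mae(pred, gt):
    # pred, gt: (N, 3) arrays of [yaw, pitch, roll] in degrees.
    # Returns per-angle MAE as a length-3 array [yaw, pitch, roll].
    err = angular_error(np.asarray(pred, dtype=float),
                        np.asarray(gt, dtype=float))
    return err.mean(axis=0)

# Example: a yaw prediction of 10 deg against ground truth 350 deg
# is a 20 deg error once wrap-around is accounted for.
mae = pose_mae([[10.0, 0.0, 0.0]], [[350.0, 5.0, -5.0]])
```

Papers on these benchmarks typically also report the average of the three per-angle MAEs as a single summary number.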
External IDs: dblp:conf/icip/ChattopadhyaySA24