Abstract: This paper addresses the real-time tracking of head and facial motion in monocular image sequences using 3D deformable models. It introduces two methods. The first method tracks only the 3D head pose using a cascade of two stages: the first stage utilizes a robust feature-based pose estimator applied to two consecutive frames, while the second stage relies on a Maximum a Posteriori inference scheme that exploits the temporal coherence of both the 3D head motion and the facial texture. The facial texture is updated dynamically in order to obtain a simple on-line appearance model. The implementation of this method is kept simple and straightforward. In addition to the 3D head pose, the second method also tracks some facial animations using an Active Appearance Model search. Tracking experiments and performance evaluations demonstrate the robustness and usefulness of the developed methods, which retain the advantages of both feature-based and appearance-based approaches.
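The dynamic texture update that yields the on-line appearance model is commonly realized as a recursive, exponentially forgetting average of the rectified facial textures. The abstract does not specify the exact update rule, so the following is only a minimal sketch under that assumption; the blending function `update_appearance` and the learning rate `alpha` are hypothetical names, not the paper's notation.

```python
import numpy as np

def update_appearance(template, observed_texture, alpha=0.1):
    """Recursively blend the current texture template with the newly
    observed (pose-rectified) facial texture.

    alpha is a hypothetical forgetting factor: larger values adapt
    faster to appearance changes but are more sensitive to outliers.
    """
    return (1.0 - alpha) * template + alpha * observed_texture

# Toy example: the template gradually drifts toward a new observation.
template = np.zeros((4, 4))           # stand-in for the stored texture map
observed = np.ones((4, 4))            # stand-in for the current frame's texture
for _ in range(10):
    template = update_appearance(template, observed, alpha=0.5)
```

After ten updates with `alpha=0.5` the template has converged to within 2^-10 of the observed texture, illustrating why such a scheme tolerates gradual illumination and expression changes while smoothing out single-frame noise.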