Boosting dense SIFT descriptors and shape contexts of face images for gender recognition

CVPR Workshops 2010 (modified: 10 Nov 2022)
Abstract: In this paper, we propose a novel face representation in which a face is described in terms of dense Scale Invariant Feature Transform (d-SIFT) descriptors and shape contexts of the face image, and we investigate its application to gender recognition. Four problems arise when applying SIFT to facial gender recognition: (1) only a few keypoints may be found in a face image because of missing texture and poor illumination; (2) the SIFT descriptors at keypoints (which we call sparse SIFT) are distinctive, whereas descriptors at non-keypoints (e.g. on a grid) could negatively affect accuracy; (3) a relatively large image size is required to obtain enough keypoints to support matching; and (4) the matching assumes that the faces are properly registered. This paper addresses these difficulties with a combination of SIFT descriptors and shape contexts of face images. Instead of extracting descriptors around interest points only, local feature descriptors are extracted at regular image grid points, yielding a dense description of the face image. In addition, the global shape contexts of the face image are fused with the dense SIFT features to improve accuracy. AdaBoost is adopted to select features and form a strong classifier. The proposed approach is then applied to the problem of gender recognition. Experimental results on a large set of faces show that the proposed method achieves high accuracy even for faces that are not aligned.
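The sketch below illustrates the dense-SIFT-plus-boosting idea described in the abstract: descriptors computed on a regular grid rather than at detected keypoints, followed by an AdaBoost classifier for gender prediction. It is not the authors' implementation; OpenCV's SIFT, the 8-pixel grid step, the fixed face-crop size, and scikit-learn's AdaBoostClassifier are all assumptions, and the shape-context features are omitted for brevity.

```python
# Minimal sketch of a dense-SIFT + AdaBoost gender classifier (assumptions:
# OpenCV SIFT, 8-px grid step, equally sized grayscale face crops,
# scikit-learn AdaBoost; shape contexts are not included).
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier


def dense_sift(gray, step=8, patch_size=8):
    """Compute SIFT descriptors on a regular grid instead of at keypoints."""
    sift = cv2.SIFT_create()
    h, w = gray.shape
    # Place one keypoint at every grid position so descriptors cover the
    # whole face, even in low-texture or poorly illuminated regions.
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size))
                 for y in range(step // 2, h, step)
                 for x in range(step // 2, w, step)]
    _, descriptors = sift.compute(gray, keypoints)
    # With a fixed crop size the grid is fixed, so this is a fixed-length vector.
    return descriptors.flatten()


def extract_features(face_crops):
    """face_crops: list of equally sized grayscale face images."""
    return np.stack([dense_sift(img) for img in face_crops])


# Usage (hypothetical data): X holds dense-SIFT vectors, y holds gender labels.
# AdaBoost with shallow base learners plays the feature-selection role
# mentioned in the abstract.
# X_train = extract_features(train_faces)
# clf = AdaBoostClassifier(n_estimators=500)
# clf.fit(X_train, y_train)
# predictions = clf.predict(extract_features(test_faces))
```

In this sketch the grid-based sampling replaces keypoint detection entirely, which is what allows small or weakly textured face images to still produce a full descriptor set.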