Does Face Recognition Error Echo Gender Classification Error?

IJCB 2021
Abstract: This paper is the first to explore whether images that are misclassified by a face analytics algorithm (e.g., gender classification) are any more or less likely to participate in an image pair that results in a face recognition error. We analyze results from three gender classification algorithms (one open-source and two commercial) and two face recognition algorithms (one open-source and one commercial) on image sets representing four demographic groups (African-American female and male, Caucasian female and male). For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution (similarity scores shifted lower) than pairs in which both images have correct gender classification, and so are less likely to generate a false match error. For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution (increased false non-match rate) compared to individuals whose images consistently have correct gender classification. Thus, compared to images with correct gender classification, images with a gender classification error have a lower false match rate and a higher false non-match rate.
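As a minimal sketch of the comparison the abstract describes, the snippet below partitions similarity scores by gender-classification outcome and computes the false match rate (FMR) over impostor pairs and the false non-match rate (FNMR) over genuine pairs at a fixed decision threshold. The score distributions, group names, and threshold are illustrative assumptions for demonstration, not the paper's data or algorithms.

```python
import numpy as np

def fmr(impostor_scores, threshold):
    """False match rate: fraction of impostor pairs scoring at or above threshold."""
    return np.mean(np.asarray(impostor_scores) >= threshold)

def fnmr(genuine_scores, threshold):
    """False non-match rate: fraction of genuine pairs scoring below threshold."""
    return np.mean(np.asarray(genuine_scores) < threshold)

# Hypothetical similarity scores, partitioned by gender-classification outcome.
rng = np.random.default_rng(0)
impostor_both_correct = rng.normal(0.30, 0.10, 10_000)  # both images classified correctly
impostor_one_error    = rng.normal(0.25, 0.10, 10_000)  # one image misclassified
genuine_all_correct   = rng.normal(0.80, 0.10, 10_000)  # all of a person's images correct
genuine_mixed         = rng.normal(0.75, 0.10, 10_000)  # mix of correct and incorrect

t = 0.55  # example decision threshold (assumed, not from the paper)
print(f"FMR  (both correct): {fmr(impostor_both_correct, t):.4f}")
print(f"FMR  (one error):    {fmr(impostor_one_error, t):.4f}")
print(f"FNMR (all correct):  {fnmr(genuine_all_correct, t):.4f}")
print(f"FNMR (mixed):        {fnmr(genuine_mixed, t):.4f}")
```

Under these assumed distributions, the impostor group with a classification error yields a lower FMR and the mixed genuine group yields a higher FNMR, mirroring the direction of the effect reported in the abstract.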