Sample-Level and Class-Level Adaptive Training for Face Recognition

Published: 01 Jan 2022, Last Modified: 09 May 2023, ICME 2022
Abstract: The marginal softmax loss function has been widely used for face recognition, where a universal angular margin is added between weight prototypes. However, this approach neglects the differences between classes and between samples. At the class level, real-world training datasets are imbalanced, so the head and tail classes require different margins in order to squeeze each class's feature space equally. At the sample level, it is also necessary to assign greater importance to hard samples during training. In this paper, we address these two issues by combining two strategies: (1) explicitly assigning an adaptive margin according to the number of images per class, so that the margin is enlarged for tail classes; (2) semantically identifying 'hard positive' samples and misclassified samples [1], and attaching adaptive weights to them to increase the training emphasis on these samples. Extensive experiments on LFW/CFP/AGEDB and IJB-B/IJB-C demonstrate our method's effectiveness.
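The abstract names the two strategies but not their exact formulas, so the following is a minimal PyTorch sketch of how they could compose, under stated assumptions: the class name `AdaptiveMarginSoftmax`, the linear margin rule `base_margin + extra_margin * (1 - n_c / n_max)`, and the constant `hard_weight` are all illustrative choices, not the paper's formulation. The misclassified-sample up-weighting loosely follows the mis-classified-vector idea cited as [1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMarginSoftmax(nn.Module):
    """Sketch of a class- and sample-adaptive margin softmax loss.

    The margin rule and weighting constants are illustrative assumptions,
    not the paper's exact formulation.
    """

    def __init__(self, feat_dim, num_classes, class_counts,
                 scale=64.0, base_margin=0.35, extra_margin=0.15,
                 hard_weight=1.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale
        self.hard_weight = hard_weight
        counts = torch.as_tensor(class_counts, dtype=torch.float32)
        # Class-level rule (assumed): the largest (head) class keeps the
        # base margin; tail classes receive up to `extra_margin` more.
        margins = base_margin + extra_margin * (1.0 - counts / counts.max())
        self.register_buffer("margins", margins)

    def forward(self, feats, labels):
        # Cosine similarity between normalized features and prototypes.
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        target_cos = cos.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Additive angular margin cos(theta + m_c) on the target logit,
        # with a per-class margin m_c taken from the buffer above.
        theta = torch.acos(target_cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        adjusted = torch.cos(theta + self.margins[labels])
        logits = cos.scatter(1, labels.unsqueeze(1), adjusted.unsqueeze(1))
        logits = logits * self.scale
        # Sample-level rule (assumed): up-weight samples that remain
        # misclassified after the margin is applied.
        with torch.no_grad():
            mis = logits.argmax(dim=1) != labels
            w = torch.where(mis,
                            torch.full_like(target_cos, self.hard_weight),
                            torch.ones_like(target_cos))
        loss = F.cross_entropy(logits, labels, reduction="none")
        return (w * loss).mean()


# Example: five identities whose image counts range from head to tail.
loss_fn = AdaptiveMarginSoftmax(feat_dim=512, num_classes=5,
                                class_counts=[1000, 400, 120, 30, 8])
feats = torch.randn(16, 512)
labels = torch.randint(0, 5, (16,))
loss = loss_fn(feats, labels)
```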