Abstract: Low-resolution (LR) face recognition (LRFR) tackles tiny face images detected from real-world surveillance camera footage, which are unconstrained and generally of poor quality. Owing to the absence of a million-scale labeled LR face dataset, identity-invariant data augmentation (DA) transformations such as flipping, rotation, and rescaling are applied to inflate the effective training examples with respect to the source identities for representation learning. Unfortunately, the identity-invariant property incurs additional intra-class disparity that impairs generalization performance. In this paper, we put forward a new DA strategy, termed identity-extended DA, that satisfies both the affinity and diversity requirements essential to DA. We instantiate an implicit identity-extended augmentation network, or simply IDEA-Net, to realize the proposed identity-extended DA for LRFR. More specifically, training an IDEA-Net instance augments the small-scale LR (query) face dataset with identity-extended (auxiliary) face examples implicitly in the representation space. We also introduce a calibrator to regulate the disordered representation space by refining intra-class compactness and inter-class separation. This diminishes the distribution shift between the original and the augmented examples (affinity) and increases the learning complexity (diversity). We substantiate that IDEA-Net renders a representation space with high affinity and diversity. Moreover, our experimental results on three real-world LR face datasets demonstrate that IDEA-Nets outperform the baselines and other counterparts trained without leveraging the identity-extended examples for LRFR.
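To make the calibrator's role concrete, below is a minimal, illustrative sketch of a compactness/separation regularizer applied to embeddings in the representation space. The function name `calibrator_loss`, the specific pairwise-distance formulation, and the `margin` parameter are assumptions for illustration only; they are not taken from the paper's actual implementation.

```python
# Illustrative sketch only: calibrator_loss, its pairwise-distance form, and the
# margin value are assumptions, not the paper's actual IDEA-Net calibrator.
import torch
import torch.nn.functional as F


def calibrator_loss(embeddings: torch.Tensor, labels: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Toy compactness/separation regularizer on L2-normalized embeddings.

    Pulls same-identity embeddings together (intra-class compactness) and
    pushes different-identity embeddings at least `margin` apart
    (inter-class separation), covering both original and augmented examples.
    """
    z = F.normalize(embeddings, dim=1)                 # (N, D) unit-norm features
    dist = torch.cdist(z, z)                           # (N, N) pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)

    pos = dist[same & ~eye]                            # same-identity pairs (excluding self)
    neg = dist[~same]                                  # different-identity pairs

    compact = pos.mean() if pos.numel() else z.new_tensor(0.0)
    separate = F.relu(margin - neg).mean() if neg.numel() else z.new_tensor(0.0)
    return compact + separate
```

In this sketch, the regularizer would be added to the main recognition loss when training on the union of the original LR embeddings and the identity-extended (auxiliary) embeddings, so that augmented examples stay close to their source distribution (affinity) while remaining distinguishable across identities (diversity).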