Abstract: Despite the recent significant progress in palmprint recognition, challenges remain in scaling this technology to real-world scenarios. One major obstacle to developing practical, highly accurate recognition models is the shortage of comprehensive public datasets that can be used to evaluate performance at extremely low false accept rates (FAR). Furthermore, obtaining high-precision recognition models is greatly hindered by pattern variance, a challenge that is especially pronounced for the palmprint modality under the current technology pipeline. To address these problems, we first collect WebPalm, a palmprint dataset containing the largest number of identities and images released to date. To reduce pattern variance, we propose RegPalm, a novel framework that unifies palmprint orientations (UPO) and learns pairwise spatial registration of palmprints (PPR) in an end-to-end manner. UPO harmonizes the pattern variance between left and right orientations, thereby enhancing the network’s perceptual capabilities. PPR decreases both inter-class and intra-class pattern variance to improve the model’s ability to recognize hard examples. RegPalm strengthens the model’s ability to discriminate subtle palmprint features, improving its performance under extremely low FAR. RegPalm not only surpasses the current state-of-the-art by 9.3 percentage points (pp) and 12.2 pp in TAR@FAR=1e-6 under the 1:1 and 1:3 open-set protocols, respectively, but also consistently achieves a 16 pp improvement in TAR@FAR=1e-9 on the WebPalm benchmark. The experimental results demonstrate the practicality and superiority of RegPalm in the real world.
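To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch of what orientation unification (UPO) and pairwise spatial registration (PPR) could look like in PyTorch. The function names, the use of a horizontal flip for orientation unification, and the 2x3 affine parameterization `theta` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def unify_orientation(palm: torch.Tensor, is_left: torch.Tensor) -> torch.Tensor:
    """Hypothetical UPO step: mirror left-hand palmprints along the width axis
    so every sample shares a single (right-hand) orientation."""
    flipped = torch.flip(palm, dims=[-1])                      # horizontal mirror
    return torch.where(is_left.view(-1, 1, 1, 1), flipped, palm)


def pairwise_register(probe: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Hypothetical PPR step: warp the probe image toward the gallery image using
    a predicted affine transform theta of shape (B, 2, 3), reducing spatial
    misalignment before feature comparison."""
    grid = F.affine_grid(theta, probe.shape, align_corners=False)
    return F.grid_sample(probe, grid, align_corners=False)


# Usage sketch: a batch of 4 palmprint crops, two of them from left hands.
palms = torch.rand(4, 1, 224, 224)
is_left = torch.tensor([True, False, True, False])
aligned = unify_orientation(palms, is_left)

# Identity affine transform as a stand-in for a network-predicted registration.
theta = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).repeat(4, 1, 1)
registered = pairwise_register(aligned, theta)
```

In the paper's framework, the registration parameters would be predicted end-to-end from a probe-gallery pair rather than supplied by hand as in this sketch.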