MHASAN: Multi-Head Angular Self Attention Network for Spoof Detection

Published: 01 Jan 2022, Last Modified: 13 Nov 2023, ICPR 2022
Abstract: Despite being the most convenient method of biometric authentication, face recognition-based systems on their own are vulnerable to different forms of face presentation attacks (PA). A successful PA can allow an attacker to maliciously gain access to an individual's sensitive information, making face recognition-based authentication systems less secure. Hence, Presentation Attack Detection (PAD) or Face Anti-Spoofing (FAS) systems have become a crucial part of such pipelines. In recent years, pixel-wise approaches have shown a great deal of promise for the PAD task. However, we find that leveraging pixel-wise features without enforcing attentive margins on the image can deteriorate the performance of the model. In this paper, we combat this problem by computing local features of image patches in the context of the global features through a self-attention mechanism. More specifically, we employ the multi-head attention available in the Transformer architecture. Furthermore, we modify the traditional self-attention method so that the similarity scores are influenced only by the angular divergence of the key and query vectors. Experiments on OULU-NPU and Replay-Mobile show that the modified Multi-Head Angular Self-Attention Network (MHASAN) based PAD framework achieves an Average Classification Error Rate (ACER) of 0.6% and 2.50% on Protocols 2 and 4 of the OULU-NPU dataset respectively, which is extremely competitive with the state-of-the-art (SOTA).
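The abstract's key modification, attention whose similarity scores depend only on the angular divergence of the key and query vectors, can be illustrated with a minimal single-head sketch. The paper's exact formulation is not given in the abstract; the sketch below assumes "angular divergence" is captured by cosine similarity, obtained by L2-normalizing the query and key vectors before the dot product, so that vector magnitudes cannot influence the scores. All function and variable names here are illustrative, not the authors' code.

```python
import numpy as np

def angular_self_attention(x, w_q, w_k, w_v):
    """Single-head angular self-attention (illustrative sketch).

    x        : (n, d) patch features
    w_q, w_k : (d, d_k) query/key projections
    w_v      : (d, d_v) value projection

    Unlike standard scaled dot-product attention, queries and keys are
    L2-normalized first, so each similarity score equals the cosine of
    the angle between a query and a key and is unaffected by magnitude.
    Returns the attended output and the attention weights.
    """
    q = x @ w_q
    k = x @ w_k
    v = x @ w_v
    # Normalize so q @ k.T contains only cosine similarities in [-1, 1]
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + 1e-8)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + 1e-8)
    scores = q @ k.T
    # Numerically stable softmax over the key dimension
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

One consequence of this design is that rescaling the input features by any positive constant leaves the attention weights unchanged, since only the angles between projected vectors matter; in standard dot-product attention the scores, and hence the weights, would shift. A multi-head version would simply run several such heads with independent projections and concatenate their outputs.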