PointFaceFormer: Local and Global Attention Based Transformer for 3D Point Cloud Face Recognition

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · FG 2024 · CC BY-SA 4.0
Abstract: Existing 3D point cloud-based face recognition methods struggle to fully leverage both the global and local information inherent in 3D point cloud data. In this paper, we introduce PointFaceFormer, the first Transformer model designed for 3D point cloud face recognition. It incorporates an attention mechanism based on dot-product and cosine similarity to construct a similarity Transformer architecture, which effectively extracts both local and global features from the point cloud data. Experimental results demonstrate that PointFaceFormer achieves a recognition accuracy of 89.08% and a verification accuracy of 76.93% on the large-scale facial point cloud dataset Lock3DFace, setting a new state of the art in 3D face recognition. Furthermore, PointFaceFormer exhibits excellent generalization performance on cross-quality datasets. Additionally, ablation experiments validate the effectiveness of the proposed attention mechanism and modules.
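The abstract describes an attention score built from both dot-product and cosine similarity. The paper's exact formulation is not given here, so the following is only a minimal, hypothetical sketch of one way such a combined score could be implemented for per-point features; the mixing weight `alpha`, the module name `DotCosineAttention`, and all tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DotCosineAttention(nn.Module):
    """Hypothetical attention mixing scaled dot-product and cosine similarity."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable balance between the two similarity terms (an assumption).
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) -- per-point features from a point cloud.
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        # Standard scaled dot-product similarity.
        dot = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5

        # Cosine similarity between L2-normalized queries and keys.
        cos = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)

        # Blend the two scores, then apply the usual softmax attention.
        attn = torch.softmax(self.alpha * dot + (1 - self.alpha) * cos, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)


# Toy usage: a batch of two 1024-point clouds with 64-d features.
feats = torch.randn(2, 1024, 64)
print(DotCosineAttention(dim=64)(feats).shape)  # torch.Size([2, 1024, 64])
```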