Abstract: Face anti-spoofing is a critical security measure that precedes face recognition systems. However, most existing detection techniques can only determine whether a face is spoofed and cannot provide fine-grained information about the type of spoofing attack. Additionally, DNN-based detection models struggle to extract the diverse attack features distributed across different network depths, and their large number of parameters makes them challenging to train. To address these issues, we propose a face anti-spoofing method based on recursive self-attention and multi-scale fusion. First, we introduce the design principle of recursive neural networks into the self-attention mechanism, combining multiple shared-weight self-attention blocks in a recursive form to extract deep-layer features while reducing the number of parameters. Furthermore, we combine atrous convolution with self-attention, using atrous convolutions with different dilation rates to endow the self-attention mechanism with multi-scale fusion capability. Extensive experiments on benchmark datasets demonstrate that our proposed method achieves superior performance while markedly reducing the number of model parameters.
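
To make the two ideas in the abstract concrete, the following is a minimal PyTorch-style sketch: one self-attention block whose weights are reused recursively, and a front-end that fuses atrous convolutions at several dilation rates. The module names, dilation rates (1, 2, 4), recursion depth, and layer sizes are illustrative assumptions, not the authors' exact architecture.

    # Sketch, not the paper's implementation: assumed rates, dims, and depth.
    import torch
    import torch.nn as nn

    class AtrousMultiScaleEmbed(nn.Module):
        """Fuse features extracted at several atrous (dilated) rates."""
        def __init__(self, in_ch: int, dim: int, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, dim, kernel_size=3, padding=r, dilation=r)
                for r in rates
            )
            self.fuse = nn.Conv2d(dim * len(rates), dim, kernel_size=1)

        def forward(self, x):
            # Concatenate multi-scale responses, then project back to `dim`.
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

    class RecursiveSelfAttention(nn.Module):
        """Apply ONE shared-weight attention block `depth` times (recursive reuse)."""
        def __init__(self, dim: int, heads: int = 4, depth: int = 3):
            super().__init__()
            self.depth = depth
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(
                nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim)
            )

        def forward(self, tokens):
            # Reusing the same parameters at every recursion step keeps the
            # parameter count of a single block while deepening the computation.
            for _ in range(self.depth):
                h = self.norm1(tokens)
                tokens = tokens + self.attn(h, h, h, need_weights=False)[0]
                tokens = tokens + self.mlp(self.norm2(tokens))
            return tokens

    class AntiSpoofSketch(nn.Module):
        """Illustrative head: real face vs. several assumed attack types."""
        def __init__(self, num_classes: int = 4, dim: int = 64):
            super().__init__()
            self.embed = AtrousMultiScaleEmbed(3, dim)
            self.attn = RecursiveSelfAttention(dim)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, img):
            feat = self.embed(img)                    # (B, dim, H, W)
            tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, dim)
            tokens = self.attn(tokens)
            return self.head(tokens.mean(dim=1))      # global average pooling

    if __name__ == "__main__":
        logits = AntiSpoofSketch()(torch.randn(2, 3, 112, 112))
        print(logits.shape)  # torch.Size([2, 4])

The fine-grained classification head above assumes four output classes (e.g., genuine plus three attack types) purely for illustration; the benchmark datasets define the actual label set.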