Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Deep learning models employed in face recognition (FR) systems have been shown to be vulnerable to physical adversarial attacks through various modalities, including patches, projections, and infrared radiation. However, existing adversarial examples targeting FR systems often suffer from conspicuousness, limited effectiveness, and insufficient robustness. To address these challenges, we propose a novel approach for adversarial face generation, UVHat, which utilizes ultraviolet (UV) emitters mounted on a hat to enable invisible and potent attacks in black-box settings. Specifically, UVHat simulates UV light sources via video interpolation and models the positions of these light sources on a curved surface, namely the human head in our study. To optimize attack performance, UVHat integrates a reinforcement learning-based optimization strategy that explores a vast parameter search space encompassing factors such as shooting distance, power, and wavelength. Extensive experimental evaluations validate that UVHat substantially improves the attack success rate in black-box settings, enabling adversarial attacks from multiple angles with enhanced robustness.
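To make the black-box search described above concrete, the sketch below shows one plausible shape such an optimization loop could take. Everything in it is an assumption rather than the paper's actual method: the FR system is replaced by a toy similarity function (simulate_attack), the parameter ranges are invented for illustration, the light-source position on the head is parameterized by two spherical angles, and a simple cross-entropy-style evolutionary search stands in for the reinforcement learning optimizer the paper uses.

# Minimal, self-contained sketch of a black-box parameter search over the
# attack factors named in the abstract. Illustrative only: the real UVHat
# pipeline, objective, and RL optimizer are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: shooting distance (m), power (W),
# wavelength (nm), and the emitter position on the head, modeled as
# (theta, phi) angles on a curved (spherical) surface.
LOW  = np.array([0.5,  1.0, 365.0, 0.0,        0.0])
HIGH = np.array([3.0, 10.0, 405.0, np.pi, 2 * np.pi])

def simulate_attack(params):
    """Toy stand-in for querying the black-box FR system: returns a
    similarity score in [0, 1]; lower is better for a dodging attack.
    The quadratic form below is purely a placeholder objective."""
    x = (params - LOW) / (HIGH - LOW)             # normalize to [0, 1]
    target = np.array([0.3, 0.8, 0.1, 0.5, 0.5])  # fictitious optimum
    return float(np.clip(np.sum((x - target) ** 2), 0.0, 1.0))

def search(iterations=200, pop=16, elite=4):
    """Cross-entropy-style search: sample candidate parameter vectors,
    keep the elites, refit the sampling distribution. A stand-in for
    the paper's reinforcement learning optimization strategy."""
    mu, sigma = (LOW + HIGH) / 2, (HIGH - LOW) / 4
    for _ in range(iterations):
        cand = np.clip(rng.normal(mu, sigma, size=(pop, 5)), LOW, HIGH)
        scores = np.array([simulate_attack(c) for c in cand])
        best = cand[np.argsort(scores)[:elite]]   # lowest similarity wins
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu, simulate_attack(mu)

params, score = search()
print("distance, power, wavelength, theta, phi:", np.round(params, 3))
print("final similarity score:", round(score, 4))

In a real deployment of this idea, simulate_attack would render the UV illumination onto a face image (the abstract mentions simulating UV light sources via video interpolation) and query the target FR model for a match score, which is why a query-efficient black-box optimizer such as RL is needed rather than gradient-based attacks.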
Lay Summary: Face recognition systems are now part of everyday life, from unlocking phones to checking airport passports. But how safe are they? Our research shows that these systems can be fooled using ultraviolet light, which is invisible to the human eye. A person can look completely normal but still avoid being recognized by the camera. We used artificial intelligence to search for the most effective way to set up the UV light, including its brightness and position. This technique works in real-world settings and does not require access to the system's internals. Our findings reveal that current face recognition systems are more vulnerable than people might expect, especially in real-world situations. By exposing this hidden risk, we hope to raise awareness and encourage the development of more secure and reliable face recognition technology. As these systems spread, protecting them from invisible attacks becomes more important than ever.
Primary Area: Social Aspects->Security
Keywords: Physical adversarial attack, Face recognition, Robustness.
Submission Number: 4327