Abstract: In federated learning (FL) environments, biometric authentication systems face a distinct challenge: safeguarding user privacy without sacrificing the precision necessary for identity verification. Whereas prior FL privacy research has primarily addressed broad-spectrum protections, this paper concentrates on the particular weaknesses of biometric authentication models, especially their susceptibility to gradient inversion and deep gradient leakage (DGL) attacks.
We introduce a privacy-preserving framework specifically designed for federated biometric authentication. Our approach employs a dual strategy: (1) an authentication model trained on both original and modified biometric samples to maintain resilience against input perturbations, and (2) a client-side obfuscation technique that alters biometric data prior to gradient computation, effectively preventing reconstruction attempts. The obfuscation is adaptive and privacy-aware, selectively preserving the biometric features critical for authentication while discarding nonessential components to reduce input size and improve accuracy. Simultaneously, this process increases the gradient distance between the original and shared data, strengthening protection against reconstruction. Additionally, block-wise shuffling is employed to disrupt the semantic structure of the input, ensuring that any reconstructed image lacks meaningful visual content.
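The paper's exact transform is not specified in this abstract; the following is a minimal illustrative sketch of block-wise shuffling, assuming a square NumPy image whose sides are divisible by the tile size (the function name and seed are placeholders, not the authors' API):

```python
import numpy as np

def blockwise_shuffle(image: np.ndarray, block: int,
                      rng: np.random.Generator) -> np.ndarray:
    """Split an image into block x block tiles and permute them randomly.

    Pixel values are preserved; only the spatial arrangement of tiles
    changes, destroying the global semantic structure of the image.
    """
    h, w = image.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Collect tiles in row-major order.
    tiles = [image[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    order = rng.permutation(len(tiles))
    out = np.empty_like(image)
    idx = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = tiles[order[idx]]
            idx += 1
    return out

rng = np.random.default_rng(0)
img = np.arange(64, dtype=np.float32).reshape(8, 8)
shuf = blockwise_shuffle(img, 4, rng)
```

Because the shuffle is a permutation of tiles, the client can apply it before gradient computation without changing the pixel distribution the model sees at the tile level.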
To validate its practical use, our framework is evaluated in a multi-biometric context using facial and fingerprint data. The block-wise transformation strategy maintains high authentication accuracy while reducing privacy risk. Experiments conducted across various adversarial FL settings show that our method significantly strengthens defenses against reconstruction attacks, outperforming traditional countermeasures.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Antti_Koskela1
Submission Number: 5116