Keywords: federated learning, security, safety, facial authentication
TL;DR: Protection against gradient leakage attacks in facial authentication through mixup augmentation.
Abstract: In the context of face recognition models, different facial features contribute unevenly to a model's ability to correctly identify individuals, making some features more critical and, therefore, more susceptible to attacks.
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors, posing significant privacy challenges in distributed learning systems where clients share gradients. Data augmentation, a technique for artificially manipulating the training set by creating modified copies of existing data, plays a crucial role in improving the accuracy of deep learning models.
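For reference, the gradient-matching objective behind DGL-style reconstruction can be stated as follows; this is the standard formulation from the gradient-leakage literature, and the notation (dummy data $(x', y')$, model weights $W$) is illustrative rather than taken from the submission:

```latex
% Standard gradient-matching objective of DGL-style attacks
% (notation is illustrative, not taken from the submission).
% The attacker optimizes dummy data (x', y') so that its gradient
% matches the gradient \nabla W shared by a client.
(x'^{*}, y'^{*}) \;=\; \arg\min_{x',\, y'}
  \left\lVert \frac{\partial \mathcal{L}\!\left(F(x'; W),\, y'\right)}{\partial W} \;-\; \nabla W \right\rVert^{2}
```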
In this paper, we explore various data augmentation methods to protect original training images at test time, thereby enhancing security in distributed learning systems while also increasing accuracy during training. Our experiments demonstrate that augmentation methods improve model performance when training on augmented images, and that the same methods can be applied at test time as perturbations that preserve some features of the image while providing safety against DGL.
This project has four primary objectives: first, to develop a vision transformer face-validation model that trains on distributed devices to ensure privacy; second, to utilize augmentation methods to perturb private images and increase neural network safety; third, to provide protection against attacks, ensuring that reconstruction attacks cannot extract sensitive information from gradients at any point in the system; and fourth, to introduce a novel perturbation method for a multi-biometric authentication system that offers accurate identification while guaranteeing the safety and anonymity of entities.
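To make the defense concrete, below is a minimal PyTorch-style sketch of mixup applied as a perturbation before gradients are computed; the function names (`mixup_batch`, `mixup_loss`), the Beta parameter `alpha`, and the in-batch pairing scheme are illustrative assumptions, not the submission's implementation.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.4):
    """Mix each image with a randomly paired image from the same batch.

    Standard mixup; `alpha` and the pairing scheme are illustrative
    defaults, not the submission's exact configuration.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    # Gradients are later computed on x_mixed, so the vector a client
    # shares never corresponds to a single raw private face image.
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Convex combination of the two labels' losses, matching the mixed inputs.
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```

In a federated round, the client would apply `mixup_batch` before the forward pass and share only gradients of `mixup_loss`, so a DGL-style attacker reconstructs, at best, a blend of two faces rather than either original image.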
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12333