LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

Published: 12 Jan 2021, Last Modified: 05 May 2023
Venue: ICLR 2021 Poster
Readers: Everyone
Keywords: facial recognition, adversarial attacks
Abstract: Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed for bypassing facial recognition systems. However, existing methods fail on full-scale systems and commercial APIs. We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large-scale databases. Additionally, we release an easy-to-use web tool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API, reducing the accuracy of each to below 1%.
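To make the idea in the abstract concrete, here is a minimal sketch of a feature-space evasion attack of the kind the paper builds on. This is not the released LowKey tool: `FeatureExtractor` is a hypothetical stand-in for a pretrained face recognition backbone, and the single-model PGD loop below omits the paper's model ensemble, perturbation smoothing, and perceptual-similarity constraint. It only illustrates the core mechanism of pushing a photo's embedding away from its clean embedding so that gallery matching fails.

```python
# Hedged sketch: PGD-style feature-space evasion, not the authors' LowKey method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Placeholder embedding network; swap in a real pretrained face model."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def evasion_attack(model, image, steps=50, eps=8 / 255, step_size=1 / 255):
    """Perturb `image` so its embedding moves away from the clean embedding."""
    model.eval()
    with torch.no_grad():
        clean_feat = model(image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_feat = model((image + delta).clamp(0, 1))
        # Ascending -similarity drives the cosine similarity down.
        loss = -F.cosine_similarity(adv_feat, clean_feat).mean()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)  # keep the perturbation visually small
            delta.grad.zero_()
    return (image + delta).detach().clamp(0, 1)

if __name__ == "__main__":
    model = FeatureExtractor()
    photo = torch.rand(1, 3, 112, 112)  # stand-in for an aligned face crop
    protected = evasion_attack(model, photo)
    with torch.no_grad():
        sim = F.cosine_similarity(model(photo), model(protected))
    print(f"cosine similarity after attack: {sim.item():.3f}")
```

A real deployment, as the abstract notes, must account for the entire pipeline (face detection, alignment, and matching against a large gallery), which is why the paper attacks an ensemble of extractors rather than the single placeholder model used here.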
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We leverage adversarial attacks in our tool, LowKey, which protects social media users from invasive mass surveillance systems.
Supplementary Material: zip
Data: [Perceptual Similarity](https://paperswithcode.com/dataset/perceptual-similarity), [UMDFaces](https://paperswithcode.com/dataset/umdfaces)