Unlearning Paradox: Auditing Residual Identity Traces in Face Recognition

ICLR 2026 Conference Submission20967 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Face Recognition, Machine Unlearning
Abstract: Face recognition systems raise a critical privacy question: how do we prove that a person’s biometric data has been deleted when laws such as the GDPR or CCPA require it? We highlight an unlearning paradox: a model can still verify “forgotten” identities because face recognition operates in an open set, where even unseen identities remain recognizable. This makes standard accuracy-based tests misleading. We contribute three ideas. (1) We formalize this paradox and show why current metrics give a false sense of forgetting. (2) We design a generative auditing framework that reconstructs faces from embeddings, exposing that existing methods retain up to 57\% of identity information even when they appear to succeed. (3) We propose FUSE (Forgetting Using Structural Erasure), which treats identities as hypercones and erases them with region-aware surrogates while preserving recognition of other identities. On CASIA-WebFace and D-LORD, FUSE reduces semantic residual ($>$0.6) on the forget set while retaining high verification accuracy for non-target classes. Our work shifts evaluation from accuracy to semantics, setting stronger privacy standards for face recognition.
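The open-set paradox the abstract describes can be sketched in a few lines: an identity is modeled as a hypercone around a class-center embedding, and a probe "verifies" whenever its angle to that center falls below a threshold, regardless of whether the identity was ever in the training set. All names and the threshold below are illustrative assumptions for this sketch, not the paper's actual method or API.

```python
import numpy as np

def in_hypercone(embedding, identity_center, cone_half_angle_deg=30.0):
    """Return True if the embedding lies inside the identity's hypercone,
    i.e. its angle to the class center is below the half-angle threshold.
    (Illustrative sketch; the threshold is an assumed value.)"""
    cos_sim = np.dot(embedding, identity_center) / (
        np.linalg.norm(embedding) * np.linalg.norm(identity_center)
    )
    angle_deg = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
    return angle_deg < cone_half_angle_deg

# A "forgotten" identity still verifies if its embeddings remain inside
# the cone -- the open-set behavior that makes accuracy-based forgetting
# tests misleading.
center = np.array([1.0, 0.0, 0.0])
probe = np.array([0.9, 0.1, 0.0])   # angle to center is about 6 degrees
print(in_hypercone(probe, center))
```

Under this geometric view, structural erasure would mean reshaping the embedding space so the forget-set cone no longer captures that identity's probes, rather than merely degrading classification accuracy.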
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 20967