Bounding the Invertibility of Privacy-preserving Instance Encoding using Fisher Information

Published: 21 Sept 2023, Last Modified: 11 Jan 2024 · NeurIPS 2023 poster
Keywords: privacy, instance encoding, split learning
TL;DR: We propose a theoretically-principled framework to bound the invertibility of instance encoding using Fisher information leakage. Our bound works for arbitrary (biased or unbiased) attackers and for real-world datasets with an intractable prior.
Abstract: Privacy-preserving instance encoding aims to encode raw data into feature vectors without revealing their privacy-sensitive information. When designed properly, these encodings can be used for downstream ML applications such as training and inference with limited privacy risk. However, the vast majority of existing schemes do not theoretically justify that their encoding is non-invertible, and their privacy-enhancing properties are only validated empirically against a limited set of attacks. In this paper, we propose a theoretically-principled measure for the invertibility of instance encoding based on Fisher information that is broadly applicable to a wide range of popular encoders. We show that this measure, termed dFIL, can be used to bound the invertibility of encodings both theoretically and empirically, providing an intuitive interpretation of the privacy of instance encoding.
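
To make the core idea concrete, here is a minimal sketch of how a Fisher-information-based invertibility bound can be computed for a simple encoder. It assumes an encoder of the form e = f(x) + Gaussian noise, for which the Fisher information matrix of x given e is J^T J / sigma^2 (J being the Jacobian of f at x), and it shows only the unbiased-attacker case of the Cramér-Rao-style bound; the encoder f, noise level sigma, and input x are illustrative placeholders, not the paper's reference implementation.

```python
# Minimal sketch (not the authors' code): per-dimension Fisher information
# leakage of a Gaussian-noised encoder e = f(x) + N(0, sigma^2 I).
# By Cramér-Rao, any unbiased reconstruction attack satisfies
# per-feature MSE >= 1 / dFIL, where dFIL = tr(I(x)) / d.
import torch
from torch.autograd.functional import jacobian

def dfil(f, x, sigma):
    """Per-dimension Fisher information leakage of e = f(x) + N(0, sigma^2 I)."""
    J = jacobian(f, x)              # Jacobian of f at x, shape (output_dim, input_dim)
    I = J.T @ J / sigma ** 2        # Fisher information matrix of x given e
    return torch.trace(I) / x.numel()

# Illustrative encoder: random linear map followed by a tanh nonlinearity.
torch.manual_seed(0)
W = torch.randn(16, 8)
f = lambda x: torch.tanh(W @ x)

x = torch.randn(8)
leakage = dfil(f, x, sigma=0.5)
print(f"dFIL = {leakage:.3f}")
print(f"per-feature MSE lower bound (unbiased attacker): {1.0 / leakage:.3f}")
```

A smaller dFIL thus certifies a larger unavoidable reconstruction error, which is what makes the quantity interpretable as a privacy measure; the paper's full framework extends this to biased attackers and data with an intractable prior.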
Supplementary Material: pdf
Submission Number: 9547