MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery

28 Sept 2020 (modified: 05 May 2023), ICLR 2021 Conference Blind Submission
Keywords: Data Recovery, Data Separability, Distributed Deep Learning
Abstract: To address the vulnerability of deep neural networks (DNNs) to model inversion attacks, we design an objective function that adjusts the separability of the hidden data representations as a way to control the trade-off between data utility and vulnerability to inversion attacks. Our method is motivated by theoretical insights on data separability in neural network training and by results on the hardness of model inversion. Empirically, we show that sweet spots exist when adjusting the separability of data representations: data are difficult to recover during inference while data utility is maintained.
One-sentence Summary: We investigate the trade-off between data utility and risk of data recovery from the angle of adjusting data separability.
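
The following is a minimal, hypothetical PyTorch sketch of the general idea described in the abstract: a standard task loss is combined with a regularization term that controls how separable the hidden representations of different classes are. The two-layer model, the centroid-distance separability measure, and the weight `lam` are illustrative assumptions for exposition only and do not correspond to the paper's actual MixCon objective.

```python
# Illustrative sketch: task loss plus a term that adjusts the separability of
# hidden representations. The separability measure here (mean pairwise distance
# between per-class centroids of hidden features) is an assumed stand-in, not
# the objective proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerNet(nn.Module):
    def __init__(self, in_dim=20, hidden_dim=64, num_classes=3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h = torch.relu(self.encoder(x))  # hidden representation whose separability we adjust
        return self.head(h), h

def separability(hidden, labels, num_classes):
    # Mean pairwise distance between class centroids of the hidden features.
    # Assumes every class appears in the batch.
    centroids = torch.stack(
        [hidden[labels == c].mean(dim=0) for c in range(num_classes)]
    )
    return torch.pdist(centroids).mean()

model = TwoLayerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # assumed trade-off weight between utility and separability control

# Toy data for the sketch.
x = torch.randn(128, 20)
y = torch.randint(0, 3, (128,))

for step in range(100):
    logits, hidden = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Penalizing separability pulls class centroids closer together, which is
    # assumed here to make the hidden representations harder to invert.
    sep = separability(hidden, y, num_classes=3)
    loss = task_loss + lam * sep
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, sweeping `lam` moves the model along the trade-off the abstract describes: a larger weight suppresses separability of the hidden features (plausibly making recovery harder at some cost to accuracy), while a smaller weight favors utility.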
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=s3G6EvDDiM