Censoring Representations with Multiple-Adversaries over Random Subspaces

15 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Adversarial feature learning (AFL) is a promising approach for explicitly constraining neural networks to learn desired representations; for example, AFL can help learn anonymized representations that avoid privacy issues. AFL learns such representations by training the network to deceive an adversary that tries to predict the sensitive information from the network, and therefore the success of AFL heavily relies on the choice of the adversary. This paper proposes a novel design of the adversary, {\em multiple adversaries over random subspaces} (MARS), which instantiates the concept of {\em vulnerableness}. The proposed method is motivated by the assumption that deceiving an adversary yields little meaningful information if the adversary is easily fooled, and that an adversary relying on a single classifier suffers from this issue. In contrast, the proposed method is designed to be less vulnerable by utilizing an ensemble of independent classifiers, where each classifier tries to predict the sensitive variables from a different {\em subset} of the representations. Empirical validation on three user-anonymization tasks shows that the proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data. This is significant because it offers new insights into the design of the adversary, which is important for improving the performance of AFL.
TL;DR: This paper improves the quality of the recently proposed adversarial feature learning (AFL) approach for incorporating explicit constraints on representations, by introducing the concept of the {\em vulnerableness} of the adversary.
Keywords: Adversarial Training, Privacy Protection, Random Subspace
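To make the MARS adversary described in the abstract concrete, below is a minimal sketch assuming a PyTorch implementation: each of K adversaries predicts the sensitive label from its own random subset of the representation's dimensions, and their predictions are averaged into an ensemble. The dimensions, layer sizes, and loss handling are illustrative assumptions, not the paper's exact setup; in full AFL training the encoder would be updated to increase this adversary loss while minimizing the main task loss.

```python
# Sketch of a MARS-style adversary (assumed PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

D, d, K = 128, 32, 5          # representation dim, subspace dim, number of adversaries
num_sensitive_classes = 10    # e.g. user identities to be anonymized

# Each adversary observes a different random subset of the representation's dimensions.
subspaces = [torch.randperm(D)[:d] for _ in range(K)]
adversaries = nn.ModuleList(
    [nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, num_sensitive_classes))
     for _ in range(K)]
)

def ensemble_logits(z):
    """Average the adversaries' predictions over their respective subspaces."""
    outs = [adv(z[:, idx]) for adv, idx in zip(adversaries, subspaces)]
    return torch.stack(outs).mean(dim=0)

# Adversary step: predict the sensitive label s from the representation z.
# Encoder step (not shown): update the encoder to *increase* this loss,
# i.e. deceive the ensemble, alongside the main task objective.
ce = nn.CrossEntropyLoss()
z = torch.randn(16, D)                               # stand-in for encoder output
s = torch.randint(0, num_sensitive_classes, (16,))   # sensitive labels
adversary_loss = ce(ensemble_logits(z), s)
```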