Censoring Representations with Multiple-Adversaries over Random Subspaces

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: In practice, there are often explicit constraints on the representations that are acceptable in real-world machine learning applications; for example, representations of data must not contain identifying information in order to avoid privacy issues. This paper improves the performance of the recently proposed adversarial feature learning (AFL) approach for incorporating such explicit constraints by introducing the concept of the vulnerability of the adversary. In AFL, the representation is censored by training the network to deceive an adversary that tries to predict the sensitive information from the representation, and the success of AFL relies on the choice of the adversary. This motivates the use of high-capacity networks as the adversary to improve performance; however, as reported in this paper, this approach does not work well in practice. Instead of network capacity, this paper proposes to consider the vulnerability of the adversary in its design, i.e., the adversary should be designed so that it is not easily fooled. We also propose a method, multiple adversaries over random subspaces (MARS), that instantiates this concept, and provide empirical validation of the efficacy of the proposed method against various baselines, indicating the importance of the proposed concept (a minimal sketch of this setup is given after the keywords below). This is significant because it offers new implications for designing the adversary, which is important for improving the performance of the AFL framework.
  • TL;DR: This paper improves the quality of the recently proposed adversarial feature learning (AFL) approach for incorporating explicit constraints on representations, by introducing the concept of the {\em vulnerability} of the adversary.
  • Keywords: Adversarial Training, Privacy Protection, Random Subspace
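
The abstract describes AFL training with multiple adversaries, each restricted to a random subspace of the learned representation. The following is a minimal sketch of that setup, assuming a PyTorch implementation; the module architectures, dimensions, the loss weight `lam`, and the use of a negated cross-entropy term as the "fooling" objective are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: adversarial feature learning with multiple adversaries
# over random subspaces (MARS-style). Assumptions: PyTorch, toy dimensions,
# and a negated cross-entropy censoring term; the paper's exact objective
# and architectures may differ.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, in_dim=64, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, x):
        return self.net(x)


class Head(nn.Module):
    """Classifier head, used both for the target task and for each adversary."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, z):
        return self.net(z)


rep_dim, sub_dim, n_adv = 32, 16, 3
enc = Encoder(rep_dim=rep_dim)
task = Head(rep_dim, n_classes=10)  # predicts the target label y from the full representation
# Each adversary only sees a fixed random subspace (subset of dimensions) of the representation.
subspaces = [torch.randperm(rep_dim)[:sub_dim] for _ in range(n_adv)]
advs = [Head(sub_dim, n_classes=2) for _ in range(n_adv)]  # each predicts the sensitive attribute s

ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam([p for a in advs for p in a.parameters()], lr=1e-3)
lam = 1.0  # weight of the censoring (adversarial) term; an assumed hyperparameter


def train_step(x, y, s):
    # 1) Update the adversaries to predict the sensitive attribute from their subspaces.
    z = enc(x).detach()
    adv_loss = sum(ce(a(z[:, idx]), s) for a, idx in zip(advs, subspaces))
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Update encoder + task head: predict y while deceiving every adversary
    #    (here by maximizing their cross-entropy via a negated term).
    z = enc(x)
    fool = sum(ce(a(z[:, idx]), s) for a, idx in zip(advs, subspaces))
    main_loss = ce(task(z), y) - lam * fool
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
    return main_loss.item(), adv_loss.item()


# Example usage with random data (batch of 8, 64-dimensional inputs).
x = torch.randn(8, 64); y = torch.randint(0, 10, (8,)); s = torch.randint(0, 2, (8,))
print(train_step(x, y, s))
```

The intent of the random subspaces is that each adversary is harder to fool uniformly, since the encoder must censor the sensitive information across every subspace rather than exploit a single adversary's blind spots; the exact alternating-update schedule above is one common AFL training pattern, not necessarily the one used in the paper.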
