Censoring Representations with Multiple-Adversaries over Random Subspaces

Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo

Feb 12, 2018 (modified: Feb 15, 2018) · ICLR 2018 Workshop Submission
  • Abstract: Adversarial feature learning (AFL) has been successfully applied to censor the representations of neural networks; for example, AFL can help learn anonymized representations that avoid privacy issues by constraining the representations with adversarial gradients that confuse external discriminators trying to extract sensitive information from the activations. In this paper, we propose an ensemble approach to the design of the discriminator, based on the intuition that the discriminator needs to be robust for AFL to succeed. Empirical validation on three user-anonymization tasks shows that the proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data. We also provide initial theoretical results on the generalization error of the adversarial gradients, which suggest that the accuracy of the discriminator is not the decisive factor in its design. (A minimal sketch of this setup is given after the keywords below.)
  • TL;DR: This paper improves the quality of the recently proposed adversarial feature learning (AFL) approach for incorporating explicit constraints into representations, by introducing the concept of the {\em vulnerability} of the adversary.
  • Keywords: Adversarial Training, Privacy Protection, Random Subspace
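The sketch below is not the authors' code; it is a minimal, hedged illustration of the general setup the abstract describes: an encoder is trained on a utility task while several adversarial discriminators, each observing a fixed random subspace of the representation, try to recover the sensitive attribute. The gradient-reversal trick, the network sizes, and all variable names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed, not the authors' implementation) of adversarial
# feature learning with multiple discriminators over random subspaces.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass,
    so the encoder is pushed to *confuse* the discriminators."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

feat_dim, n_classes, n_sensitive, n_adv, sub_dim = 64, 10, 5, 3, 32  # illustrative sizes

encoder = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())
task_head = nn.Linear(feat_dim, n_classes)            # utility task (kept useful)
# Each adversary sees only a fixed random subset of the feature dimensions.
subspaces = [torch.randperm(feat_dim)[:sub_dim] for _ in range(n_adv)]
adversaries = nn.ModuleList([nn.Linear(sub_dim, n_sensitive) for _ in range(n_adv)])

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()) +
    list(adversaries.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Dummy batch: inputs x, task labels y, sensitive labels s (e.g. user identity).
x = torch.randn(8, 128)
y = torch.randint(0, n_classes, (8,))
s = torch.randint(0, n_sensitive, (8,))

z = encoder(x)
task_loss = ce(task_head(z), y)
# Discriminators learn to predict s from their subspace; the reversed gradient
# simultaneously trains the encoder to make s unpredictable in every subspace.
adv_loss = sum(ce(d(GradReverse.apply(z)[:, idx]), s)
               for d, idx in zip(adversaries, subspaces)) / n_adv

opt.zero_grad()
(task_loss + adv_loss).backward()
opt.step()
```

In this reading, averaging the losses of several subspace-restricted adversaries is what makes the censoring signal harder for the encoder to evade than a single full-view discriminator would be; the exact ensembling and update scheme used in the paper may differ.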