Close-to-clean regularization relates virtual adversarial training, ladder networks and others

Mudassar Abbas, Jyri Kivinen, Tapani Raiko

Feb 17, 2016 (modified: Feb 17, 2016) · ICLR 2016 workshop submission · Readers: everyone
  • Abstract: We propose a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidean distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. We relate this framework to many existing regularization methods: It is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term by the ladder networks in Rasmus et al. (2015). We also study the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our main contribution is discovering connections between the proposed and existing regularization methods.
  • Conflicts: aalto.fi, zenrobotics.com, thecuriousaicompany.com, ed.ac.uk, umontreal.ca
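
The sketch below is a minimal Python illustration of the penalty described in the abstract, not code from the paper: it shows the random-perturbation variant (the nearby point could instead be chosen adversarially). The mapping f, the Gaussian noise scale sigma, and the toy linear example are illustrative assumptions.

    import numpy as np

    def close_to_clean_penalty(f, x, sigma=0.1, rng=None):
        # Squared Euclidean distance between the mapping's output at the
        # clean point x and at a nearby, randomly perturbed point x + eps.
        rng = np.random.default_rng() if rng is None else rng
        eps = sigma * rng.standard_normal(x.shape)  # random nearby point
        return float(np.sum((f(x) - f(x + eps)) ** 2))

    # Toy usage with a linear mapping f(x) = W x; for small isotropic noise
    # the expected penalty is approximately sigma**2 * ||W||_F**2, i.e. a
    # stochastic estimate of the (scaled) Frobenius norm of the Jacobian.
    W = np.array([[1.0, 2.0], [0.5, -1.0]])
    f = lambda x: W @ x
    x = np.array([0.3, -0.7])
    print(close_to_clean_penalty(f, x, sigma=0.05))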
