Semantic Noise Modeling for Better Representation Learning

Hyo-Eun Kim, Sangheum Hwang, Kyunghyun Cho

Nov 02, 2016 (modified: Dec 29, 2016) ICLR 2017 conference submission
  • Abstract: Latent representations learned by multi-layered neural networks via hierarchical feature abstraction underlie the recent success of deep learning. Under the deep learning framework, generalization performance depends heavily on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representations. We design a neural network model based on the assumption that a good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual information terms between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method that enables semantic perturbation in the latent space to enhance the representational power of the learned latent features. During training, the latent vector representation can be stochastically perturbed by modeled additive noise while preserving its original semantics, which implicitly has the effect of semantic augmentation in the latent space. The proposed model can be easily trained by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method achieves performance gains over various previous approaches. We also provide empirical analyses of the proposed latent space modeling method, including t-SNE visualization.
  • TL;DR: A novel latent space modeling method to learn better representation
  • Keywords: Deep learning, Supervised Learning
  • Conflicts: lunit.io, kaist.ac.kr, samsung.com, nyu.edu, umontreal.ca
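The core idea sketched in the abstract, stochastically perturbing the latent vector with additive noise during training while asking the classifier to preserve the original semantics, can be illustrated with a minimal toy example. This is a NumPy sketch with made-up layer shapes, a fixed Gaussian noise scale, and a simple linear encoder/classifier; the paper's actual method models the noise itself rather than fixing it a priori.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a linear encoder and a linear classifier.
W_enc = rng.normal(size=(8, 4))   # encoder weights: 8-dim input -> 4-dim latent
W_cls = rng.normal(size=(4, 3))   # classifier weights: 4-dim latent -> 3 classes

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

x = rng.normal(size=(16, 8))       # a mini-batch of inputs
y = rng.integers(0, 3, size=16)    # class labels

h = x @ W_enc                           # clean latent representation
noise = 0.1 * rng.normal(size=h.shape)  # additive noise on the latent space
h_tilde = h + noise                     # perturbed latent vector

loss_clean = cross_entropy(softmax(h @ W_cls), y)
loss_pert = cross_entropy(softmax(h_tilde @ W_cls), y)

# Training minimizes both terms, so the perturbed latent must keep the
# original semantics (the same label), which acts as implicit semantic
# augmentation in the latent space.
total_loss = loss_clean + loss_pert
```

In the sketch, both the clean and perturbed latent vectors feed the same classifier under the same label, so gradient descent on `total_loss` pushes the representation to stay predictive under the perturbation.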
