Target-Embedding Autoencoders for Supervised Representation Learning

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Abstract: Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the notion of target-embedding autoencoders (TEA) for supervised prediction, designed to learn intermediate latent representations jointly optimized to be both predictable from features and predictive of targets, encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond the commonly-studied static domain to multivariate sequence forecasting, investigating the advantage that TEAs confer on both linear and nonlinear architectures. (A minimal sketch of the architecture follows the keywords below.)
  • Keywords: autoencoders, supervised learning, representation learning, target-embedding
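
As an illustration of the framework described in the abstract, here is a minimal sketch of a linear TEA in PyTorch: the target y is embedded into a compact latent z, a decoder reconstructs y from z, and a predictor maps features x into the same latent space. The names (`LinearTEA`, `tea_loss`), the particular loss decomposition, and the weighting `lam` are assumptions made for illustration, not necessarily the submission's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearTEA(nn.Module):
    """Linear target-embedding autoencoder (illustrative sketch).

    The target y is encoded into a compact latent z, a decoder
    reconstructs y from z, and a predictor maps features x into the
    same latent space. At test time the target encoder is discarded
    and prediction proceeds as decode(predict(x)).
    """
    def __init__(self, x_dim: int, y_dim: int, z_dim: int):
        super().__init__()
        self.encode = nn.Linear(y_dim, z_dim)     # E: y -> z (target embedding)
        self.decode = nn.Linear(z_dim, y_dim)     # D: z -> y (reconstruction)
        self.predict_z = nn.Linear(x_dim, z_dim)  # F: x -> z (feature predictor)

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.predict_z(x))

def tea_loss(model: LinearTEA, x: torch.Tensor, y: torch.Tensor,
             lam: float = 1.0) -> torch.Tensor:
    """Joint objective: latent-prediction loss plus an auxiliary
    reconstruction loss acting as a regularizer. The decomposition
    and the weight `lam` are assumptions for illustration."""
    z = model.encode(y)
    rec = F.mse_loss(model.decode(z), y)      # reconstruct targets from z
    pred = F.mse_loss(model.predict_z(x), z)  # predict the embedding from x
    return pred + lam * rec

# Usage with illustrative shapes: 64-dim features, 128-dim targets, 8-dim latent.
x, y = torch.randn(32, 64), torch.randn(32, 128)
model = LinearTEA(x_dim=64, y_dim=128, z_dim=8)
loss = tea_loss(model, x, y)
loss.backward()
```

At test time, model.predict(x) composes the decoder with the feature predictor; the auxiliary reconstruction term is the regularizer whose stability benefit the paper analyzes in the linear case.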