Generalizable Features From Unsupervised Learning

Mehdi Mirza, Aaron Courville, Yoshua Bengio

Feb 14, 2017 (modified: Feb 14, 2017) ICLR 2017 workshop submission
  • Abstract: Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution.
  • TL;DR: Using generated data from a next frame predictor model to make a supervised model generalize better to unseen distributions.
  • Conflicts: umontreal.ca, google.com
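The two-stage pipeline the abstract describes can be sketched in miniature: first train a model, without labels, to predict the next frame of a sequence; then reuse its learned encoder as a feature extractor for a supervised stability classifier. The sketch below is illustrative only, using toy linear models and synthetic data in place of the paper's convolutional next-frame predictor and block-tower videos; all variable names and the data-generating process are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video" data: 200 flattened 8x8 frames; the "next frame" follows
# fixed linear dynamics (pixels rolled by one position).
n, d, k = 200, 64, 16
X = rng.normal(size=(n, d))          # frame at time t
Y = np.roll(X, 1, axis=1)            # frame at time t+1

# Stage 1: unsupervised next-frame predictor, y_hat = (X @ W_enc) @ W_dec,
# trained by plain gradient descent on mean squared error.
W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder: frame -> features
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder: features -> next frame
mse_init = ((X @ W_enc @ W_dec - Y) ** 2).mean()
lr = 0.01
for _ in range(500):
    H = X @ W_enc
    err = H @ W_dec - Y
    grad_dec = H.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
mse_final = ((X @ W_enc @ W_dec - Y) ** 2).mean()

# Stage 2: reuse the frozen encoder's features for a supervised
# "stable vs. unstable" classifier (logistic regression; toy labels).
labels = (X[:, :32].sum(axis=1) > 0).astype(float)
feat = X @ W_enc                     # features from the unsupervised model
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feat @ w + b)))
    g = p - labels
    w -= 0.1 * feat.T @ g / n
    b -= 0.1 * g.mean()
p = 1.0 / (1.0 + np.exp(-(feat @ w + b)))
acc = ((p > 0.5).astype(float) == labels).mean()
```

The point of the sketch is the division of labor: the encoder's weights are shaped only by the unsupervised prediction objective, and the supervised classifier operates on those fixed features rather than on raw pixels.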
