Generalizable Features From Unsupervised Learning
Mehdi Mirza, Aaron Courville, Yoshua Bengio
Feb 14, 2017 (modified: Feb 14, 2017) · ICLR 2017 workshop submission · readers: everyone
Abstract: Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution.
TL;DR: Using generated data from a next-frame predictor model to make a supervised model generalize better to unseen distributions.
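The pipeline the abstract describes — freeze an unsupervised model's features, then train a supervised stability classifier on top of them — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real encoder is a trained next-frame video predictor, whereas here a fixed random projection stands in for the frozen features, and the synthetic data and "stability" labels are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_FEAT, N = 64, 16, 200
# Stand-in for the frozen unsupervised encoder (the paper's is a next-frame
# video predictor); a fixed random projection followed by a tanh.
W_enc = rng.standard_normal((D_IN, D_FEAT)) / np.sqrt(D_IN)

def encode(x):
    """Frozen features: no gradients flow into W_enc during supervised training."""
    return np.tanh(x @ W_enc)

# Illustrative "tower" data: label 1 ("stable") when the summed block offsets
# are small. This is a toy proxy, not the paper's block-tower dataset.
X_train = rng.standard_normal((N, D_IN))
y_train = (np.abs(X_train.sum(axis=1)) < 6.0).astype(float)

# Supervised head: logistic regression trained on the frozen features only.
F = encode(X_train)
w, b, lr = np.zeros(D_FEAT), 0.0, 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # predicted stability probability
    grad = p - y_train                      # gradient of cross-entropy w.r.t. logits
    w -= lr * F.T @ grad / N
    b -= lr * grad.mean()

train_acc = (((F @ w + b) > 0) == y_train.astype(bool)).mean()
```

Because only the head's parameters `w` and `b` are updated, whatever structure the unsupervised features capture is preserved, which is the mechanism the paper credits for better extrapolation to block configurations outside the training distribution.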