Generalizable Features From Unsupervised Learning

03 Nov 2024 (modified: 22 Oct 2023) · ICLR 2017 Invite to Workshop
TL;DR: Using data generated by a next-frame predictor model to make a supervised model generalize better to unseen distributions.
Abstract: Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution.
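The two-stage recipe the abstract describes, pretraining a next-frame predictor without labels and then reusing its frozen features for a supervised stability classifier, can be sketched in miniature. Everything below is hypothetical: the linear "tower" dynamics, the random-projection encoder, and the logistic-regression readout are stand-ins for the paper's actual video model and classifier, chosen only to make the pipeline concrete and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(stable, T=5, d=64):
    """Toy stand-in for a video of a block tower: stable towers settle
    (frames shrink toward rest), unstable ones diverge (frames grow)."""
    x = rng.normal(size=d)
    scale = 0.8 if stable else 1.3        # hypothetical dynamics, not real physics
    frames = [x]
    for _ in range(T - 1):
        x = scale * x + 0.05 * rng.normal(size=d)
        frames.append(x)
    return np.stack(frames)

seqs = [(make_sequence(s), s) for s in rng.integers(0, 2, 200).astype(bool)]

# (frame_t, frame_{t+1}) pairs for the unsupervised objective.
X_t  = np.concatenate([f[:-1] for f, _ in seqs])
X_t1 = np.concatenate([f[1:]  for f, _ in seqs])

# Stage 1 (unsupervised): a next-frame predictor built from a fixed
# random-projection encoder and a least-squares decoder. No stability
# labels are used in this stage.
H = 32
W_enc = rng.normal(scale=1.0 / np.sqrt(64), size=(64, H))
encode = lambda frames: np.tanh(frames @ W_enc)   # hidden features
W_dec, *_ = np.linalg.lstsq(encode(X_t), X_t1, rcond=None)

# Stage 2 (supervised): freeze the encoder, summarize each sequence by its
# mean feature magnitude, and fit a logistic-regression stability readout.
feats  = np.stack([np.abs(encode(f)).mean(axis=0) for f, _ in seqs])
labels = np.array([float(y) for _, y in seqs])
w, b = np.zeros(H), 0.0
for _ in range(500):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    g = p - labels
    w -= 0.1 * feats.T @ g / len(labels)
    b -= 0.1 * g.mean()
train_acc = ((p > 0.5) == labels.astype(bool)).mean()
```

The substantive step this sketch elides is replacing the random-projection encoder with a frame predictor actually trained on video, as in the paper; the point here is only that the stage-2 classifier never sees raw frames, only features produced by the stage-1 model.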
Conflicts: umontreal.ca, google.com
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1612.03809/code)