Reasoning About Physical Interactions with Object-Oriented Prediction and Planning

Sep 27, 2018 (edited Jan 04, 2019) · ICLR 2019 Conference Blind Submission
  • Abstract: Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difficult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding without direct supervision of object properties. Our model, Object-Oriented Prediction and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training.
  • Keywords: structured scene representation, predictive models, intuitive physics, self-supervised learning
  • TL;DR: We present a framework for learning object-centric representations suitable for planning in tasks that require an understanding of physics.
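The abstract describes three jointly learned components: a perception function (images to object vectors), a pairwise physics interaction function (time evolution of the object set), and a rendering function (object vectors back to pixels). The sketch below shows how these pieces compose into a prediction pipeline. It is a minimal illustration, not the paper's implementation: the function names, dimensions, and the toy linear/attractive dynamics are all assumptions standing in for trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
OBJ_DIM = 8    # hypothetical object-vector size
IMG_SIZE = 16  # hypothetical image side length

def perceive(image, num_objects):
    """Perception: map an image to a set of object vectors (stub).

    A trained model would use a learned encoder; here we use a fixed
    random projection per object slot purely to get the shapes right.
    """
    flat = image.reshape(-1)
    W = rng.standard_normal((num_objects, OBJ_DIM, flat.size)) * 0.01
    return np.stack([w @ flat for w in W])  # (num_objects, OBJ_DIM)

def interact(objects):
    """Pairwise physics: each object's next state is its own transition
    plus a sum of pairwise interaction terms with every other object."""
    n = len(objects)
    nxt = np.empty_like(objects)
    for i in range(n):
        self_term = objects[i]                   # f(o_i): identity stub
        pair_terms = sum(
            0.1 * (objects[j] - objects[i])      # g(o_i, o_j): toy attraction
            for j in range(n) if j != i
        )
        nxt[i] = self_term + pair_terms
    return nxt

def render(objects):
    """Rendering: map object vectors back to a single image (stub).

    Objects are composited additively onto one canvas."""
    W = rng.standard_normal((OBJ_DIM, IMG_SIZE * IMG_SIZE)) * 0.01
    canvas = sum(o @ W for o in objects)
    return canvas.reshape(IMG_SIZE, IMG_SIZE)

# Forward pass: observe, predict one step of physics, decode to pixels.
image = rng.standard_normal((IMG_SIZE, IMG_SIZE))
objs = perceive(image, num_objects=3)
objs_next = interact(objs)
pred_image = render(objs_next)
```

Because prediction happens in the object space, the same `interact` step can be rolled out multiple times before rendering, which is what makes the representation usable for planning (e.g. scoring candidate block placements by their predicted outcomes).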