Perception Updating Networks: On architectural constraints for interpretable video generative models

Modified: Feb 12, 2017 · ICLR 2017 workshop submission · readers: everyone
  • Abstract: We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites", or percepts inferred from the scene by maximum likelihood, and infers their movement independently of their content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.
  • Keywords: Structured prediction, Unsupervised Learning
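The "what-where" decomposition described in the abstract can be sketched as a renderer that composites a content template (the "what", a sprite) at an inferred position (the "where", a location map). This is only an illustrative sketch of the idea, not the paper's implementation; the function and variable names below are hypothetical.

```python
import numpy as np

def render_frame(sprite, location_map):
    """Composite a 'what' (sprite) at a 'where' (location map).

    Equivalent to a 2D convolution of the sprite with the location map:
    each nonzero entry of location_map stamps a weighted copy of the
    sprite at that position.

    sprite: (h, w) array, the content template.
    location_map: (H, W) array, e.g. one-hot or a softmax over positions.
    Returns an (H + h - 1, W + w - 1) frame.
    """
    H, W = location_map.shape
    h, w = sprite.shape
    frame = np.zeros((H + h - 1, W + w - 1))
    for i in range(H):
        for j in range(W):
            if location_map[i, j] != 0.0:
                frame[i:i + h, j:j + w] += location_map[i, j] * sprite
    return frame

# Toy example: a 3x3 sprite placed at position (2, 4) on an 8x8 grid.
sprite = np.ones((3, 3))
loc = np.zeros((8, 8))
loc[2, 4] = 1.0
frame = render_frame(sprite, loc)
```

Because the sprite and the location map are separate inputs, motion across frames can be modeled by updating only the location map while the sprite stays fixed, which is the separation of movement from content the abstract refers to.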