Perception Updating Networks: On architectural constraints for interpretable video generative models
Abstract: We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites", the percepts it infers from the scene by maximum likelihood, and models their movement independently of their content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.
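To make the "what-where" decomposition in the abstract concrete, the sketch below shows one recurrent prediction step in which a "what" branch decodes a sprite (content) and a "where" branch decodes a softmax location map, with the next frame composed by convolving the location map with the sprite. This is a minimal illustration under assumptions: the class name `WhatWhereStep`, the layer sizes, and the composition-by-convolution choice are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WhatWhereStep(nn.Module):
    """One recurrent "what-where" prediction step (illustrative sketch,
    not the authors' exact architecture)."""

    def __init__(self, hidden_size=128, sprite_size=9, frame_size=32):
        super().__init__()
        self.sprite_size = sprite_size
        self.frame_size = frame_size
        self.rnn = nn.GRUCell(frame_size * frame_size, hidden_size)
        # "what": a small sprite decoded from the recurrent state
        self.what = nn.Linear(hidden_size, sprite_size * sprite_size)
        # "where": a distribution over frame positions
        self.where = nn.Linear(hidden_size, frame_size * frame_size)

    def forward(self, frame, h):
        B = frame.size(0)
        h = self.rnn(frame.flatten(1), h)
        sprite = torch.sigmoid(self.what(h)).view(B, 1, self.sprite_size, self.sprite_size)
        # softmax over spatial positions yields a near-delta "where" map
        loc = F.softmax(self.where(h), dim=1).view(1, B, self.frame_size, self.frame_size)
        # place each sample's sprite at its inferred location via a
        # per-sample (grouped) convolution of the location map
        next_frame = F.conv2d(loc, sprite, padding=self.sprite_size // 2, groups=B)
        return next_frame.view(B, 1, self.frame_size, self.frame_size), h


# Toy usage: predict one future frame for a batch of 32x32 frames.
step = WhatWhereStep()
frames = torch.rand(4, 1, 32, 32)
hidden = torch.zeros(4, 128)
pred, hidden = step(frames, hidden)
print(pred.shape)  # torch.Size([4, 1, 32, 32])
```

Keeping the sprite decoder and the location decoder as separate heads is what makes the step behave as a what-where network: the content and its placement remain individually inspectable.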
Keywords: Structured prediction, Unsupervised Learning
Conflicts: ufl.edu