Efficient Inference in Occlusion-Aware Generative Models of Images

19 Apr 2024 (modified: 15 Feb 2016) · ICLR 2016 workshop submission
CMT Id: 307
Abstract: We present a generative model of images based on layering, in which image layers are individually generated and then composited from front to back. This lets us factor the appearance of an image into the appearances of the individual objects within it, and, for each object, to further factor content from pose. Unlike prior work on layered models, we learn a shape prior for each object/layer, allowing the model to tease out which object is in front by looking for a consistent shape, without needing motion cues or any labeled data. We show that ordinary stochastic gradient variational Bayes (SGVB), which optimizes our fully differentiable lower bound on the log-likelihood, suffices to learn an interpretable representation of images. Finally, we present experiments demonstrating the model's effectiveness at inferring foreground and background objects in images.
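The front-to-back compositing the abstract describes can be sketched with the standard "over" operator, where each layer carries an appearance image and a soft alpha mask playing the role of the inferred shape. This is a minimal illustrative sketch of layered compositing, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def composite_front_to_back(layers):
    """Composite image layers front to back with the 'over' operator.

    `layers` is a list of (appearance, mask) pairs ordered front to back:
    `appearance` is an HxWxC float array in [0, 1]; `mask` is an HxW alpha
    map in [0, 1], standing in for the per-layer shape a layered model
    would infer. (Hypothetical names; a sketch of the layering idea only.)
    """
    h, w, c = layers[0][0].shape
    out = np.zeros((h, w, c))
    # Fraction of each pixel not yet covered by a nearer layer.
    remaining = np.ones((h, w, 1))
    for appearance, mask in layers:
        alpha = mask[..., None]
        out += remaining * alpha * appearance   # nearer layers occlude farther ones
        remaining *= (1.0 - alpha)
    return out
```

Because compositing is a chain of multiplications and additions, it is differentiable in both the appearances and the masks, which is what allows the lower bound on the log-likelihood to be optimized end to end with SGVB.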
Conflicts: stanford.edu, google.com, ucla.edu