Efficient Inference in Occlusion-Aware Generative Models of Images
Jonathan Huang, Kevin Murphy
Feb 15, 2016 (modified: Feb 15, 2016) · ICLR 2016 workshop submission · Readers: everyone
Abstract: We present a generative model of images based on layering, in which image layers are individually generated, then composited from front to back. We are thus able to factor the appearance of an image into the appearances of the individual objects within it, and additionally, for each individual object, to factor content from pose. Unlike prior work on layered models, we learn a shape prior for each object/layer, allowing the model to tease out which object is in front by looking for a consistent shape, without needing access to motion cues or any labeled data. We show that ordinary stochastic gradient variational Bayes (SGVB), which optimizes our fully differentiable lower bound on the log-likelihood, is sufficient to learn an interpretable representation of images. Finally, we present experiments demonstrating the effectiveness of the model for inferring foreground and background objects in images.
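The front-to-back compositing the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; it is a standard alpha-compositing recursion, assuming each layer contributes a per-pixel color and a soft shape mask (the role the learned shape prior would play), with layers ordered front-most first:

```python
import numpy as np

def composite_front_to_back(layers):
    """Composite image layers from front to back.

    Each layer is a (color, alpha) pair: `color` is an (H, W, 3) array and
    `alpha` an (H, W, 1) soft mask in [0, 1] (e.g. produced by a learned
    shape prior). Layers are ordered front-most first; a pixel shows later
    (background) layers only to the extent the earlier layers leave it
    uncovered.
    """
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    remaining = np.ones((h, w, 1))  # fraction of each pixel still visible
    for color, alpha in layers:
        out += remaining * alpha * color
        remaining *= (1.0 - alpha)   # occlude what this layer covered
    return out

# Hypothetical two-layer example: a green foreground occluding the top row
# of an opaque red background.
fg_color = np.zeros((2, 2, 3)); fg_color[..., 1] = 1.0   # green
fg_alpha = np.zeros((2, 2, 1)); fg_alpha[0] = 1.0        # opaque top row
bg_color = np.zeros((2, 2, 3)); bg_color[..., 0] = 1.0   # red
bg_alpha = np.ones((2, 2, 1))                            # fully opaque
img = composite_front_to_back([(fg_color, fg_alpha), (bg_color, bg_alpha)])
```

Because every operation here is differentiable in the layer colors and masks, a lower bound on the log-likelihood built from such a compositor can be optimized directly with SGVB, as the abstract notes.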
Conflicts: stanford.edu, google.com, ucla.edu