Neural Groundplans: Persistent Neural Scene Representations from a Single Image

ICLR 2023, Submission ID: 356

Overview


Novel View Synthesis and Static-Dynamic Disentanglement

Given a single image as input, our method represents the scene as a static and a dynamic groundplan. This representation is then used to render novel views by compositing the contributions from the two groundplans with a neural renderer. Each groundplan can also be rendered individually, showing the static and the dynamic (movable) parts of the scene respectively. In the video below, we show the composite rendering of the static and dynamic groundplans, as well as each groundplan rendered individually, from a circular camera trajectory.
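As a rough illustration of this compositing step, below is a minimal PyTorch sketch assuming each groundplan is a bird's-eye-view grid with a density channel followed by feature channels, and that densities from the two plans are summed before volume rendering. All names (sample_plan, composite_render) are hypothetical, and the height dimension is collapsed for brevity; this is a sketch of the general idea, not the paper's exact renderer.

```python
import torch
import torch.nn.functional as F

def sample_plan(plan, xz):
    # plan: (1, C, H, W) bird's-eye-view grid; xz: (N, S, 2) sample
    # locations in [-1, 1] along N rays with S samples each.
    grid = xz.unsqueeze(0)                      # (1, N, S, 2)
    out = F.grid_sample(plan, grid, align_corners=True)  # (1, C, N, S)
    return out.squeeze(0).permute(1, 2, 0)      # (N, S, C)

def composite_render(static_plan, dynamic_plan, xz, deltas):
    # Sum densities from both plans at each sample, then
    # alpha-composite the combined features along each ray.
    s = sample_plan(static_plan, xz)            # (N, S, C)
    d = sample_plan(dynamic_plan, xz)
    sigma = F.relu(s[..., 0] + d[..., 0])       # combined density, (N, S)
    feat = s[..., 1:] + d[..., 1:]              # combined features, (N, S, C-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)    # deltas: (N, S) sample spacing
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]),
                   1.0 - alpha[:, :-1] + 1e-10], dim=1), dim=1)
    weights = alpha * trans                     # volume-rendering weights
    return (weights.unsqueeze(-1) * feat).sum(dim=1)  # (N, C-1) per-ray features
```

Rendering only one of the two plans amounts to zeroing the other plan's density before compositing, which is how the individual static and dynamic renderings in the video can be produced.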


Localization

Because our method computes independent groundplans for the static and dynamic (movable) components of the input image, the densities expressed by the dynamic groundplan enable bird's-eye-view segmentation, 2D instance-level segmentation, and 3D bounding-box prediction, all without supervision.
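To make this concrete, here is a minimal NumPy/SciPy sketch of one plausible pipeline: threshold the dynamic groundplan's density channel to get a bird's-eye-view mask, then split it into instances via connected components. The threshold value and the function name localize_bev are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def localize_bev(dynamic_density, threshold=0.5):
    # dynamic_density: (H, W) bird's-eye-view density grid.
    mask = dynamic_density > threshold          # BEV segmentation
    labels, num = ndimage.label(mask)           # connected components = instances
    boxes = []
    for sl in ndimage.find_objects(labels):     # per-instance BEV boxes
        y0, y1 = sl[0].start, sl[0].stop
        x0, x1 = sl[1].start, sl[1].stop
        boxes.append((x0, y0, x1, y1))
    return mask, labels, boxes
```

The resulting BEV instance masks can then be projected into the image for 2D segmentation, or extruded with a height estimate to obtain 3D bounding boxes.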


Object-Centric Representation and Scene Editing

Reliable localization via the dynamic groundplan yields an individual object-centric representation for each movable object in the input image. These representations can be manipulated independently to edit the scene, enabling object deletion, insertion, and rearrangement.
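Since each object occupies a distinct region of the dynamic groundplan, editing reduces to moving or clearing cells in that grid. Below is a minimal NumPy sketch of rearranging one object; move_object is a hypothetical helper, and the sketch assumes the shift keeps the object inside the grid (no wraparound).

```python
import numpy as np

def move_object(dynamic_plan, instance_mask, dx, dz):
    # dynamic_plan: (C, H, W) groundplan (density + features);
    # instance_mask: (H, W) boolean mask for one localized object.
    edited = dynamic_plan.copy()
    edited[:, instance_mask] = 0.0              # delete object at old location
    shifted = np.roll(instance_mask, shift=(dz, dx), axis=(0, 1))
    edited[:, shifted] = dynamic_plan[:, instance_mask]  # re-insert, shifted
    return edited
```

Deletion corresponds to the clearing step alone, and insertion to pasting an object's cells from another plan; the edited groundplan is then rendered exactly as before.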


Comparison for Novel View Synthesis