Unsupervised 3D Scene Representation Learning via Movable Object Inference

Published: 27 Mar 2024, Last Modified: 27 Mar 2024. Accepted by TMLR.
Abstract: Unsupervised, category-agnostic, object-centric 3D representation learning for complex scenes remains an open problem in computer vision. While a few recent methods can discover 3D objects from a single image, they still struggle on scenes with diverse and complex object configurations, as they discover objects mostly by appearance similarity, which is insufficient for textured objects. In this work, we propose Movable Object Radiance Fields (MORF), which aims to scale to complex scenes with diverse categories of objects. Inspired by cognitive science studies of object learning in babies, MORF learns 3D object representations via movable object inference. Although obtaining 3D movable-object signals would require multi-view videos of moving objects, we instead propose lifting a 2D movable object inference module that can be pretrained without supervision on monocular videos. As a result, MORF requires only multi-view images of static training scenes. At test time, MORF can discover, reconstruct, and move unseen objects from novel categories, all from a single image of a novel scene. We introduce a challenging simulated dataset with a diverse set of textured objects for training and testing. Experiments show that MORF extracts accurate object geometry and supports realistic object and scene reconstruction and editing, significantly outperforming the state of the art.
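To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the general idea: a 2D movable-object inference module yields per-object codes from a single image, each code conditions a toy object radiance field, and the fields are composed by standard volume rendering; editing a code "moves" its object. All function and variable names here (`encode_movable_objects`, `object_field`, `render_ray`) are illustrative assumptions, not the released MORF API.

```python
# Hypothetical sketch: composing per-object radiance fields discovered from a
# single image and rendering them with volume rendering. Illustrative only.
import numpy as np

def encode_movable_objects(image, num_slots=3, latent_dim=8, seed=0):
    """Stand-in for a 2D movable-object inference module: returns one latent
    code per discovered object. A real module would be pretrained on videos."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(num_slots, latent_dim))

def object_field(points, latent):
    """Toy conditional radiance field: density and color at 3D points,
    conditioned on an object latent (a real model would use an MLP)."""
    center = latent[:3]                      # pretend the latent encodes a center
    d2 = np.sum((points - center) ** 2, axis=-1)
    density = np.exp(-d2)                    # soft density blob around the center
    color = np.tile(np.abs(latent[3:6]) % 1.0, (len(points), 1))
    return density, color

def render_ray(origin, direction, latents, n_samples=64, t_far=6.0):
    """Composite all object fields along one ray via volume rendering."""
    t = np.linspace(0.0, t_far, n_samples)
    points = origin + t[:, None] * direction
    densities, colors = zip(*(object_field(points, z) for z in latents))
    sigma = np.sum(densities, axis=0)                       # densities add
    color = sum(d[:, None] * c for d, c in zip(densities, colors))
    color = color / np.maximum(sigma[:, None], 1e-8)        # density-weighted color
    delta = t_far / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return np.sum((trans * alpha)[:, None] * color, axis=0)

latents = encode_movable_objects(image=None)      # image unused in this toy stub
latents[0, :3] += np.array([0.5, 0.0, 0.0])       # editing a code "moves" that object
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), latents)
print(pixel)
```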
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=lbWjeHMA5T
Changes Since Last Submission: Minor revisions in the final claims-oriented paragraph of the introduction and the abstract to specify the dataset on which MORF is demonstrated
Code: https://github.com/neuroailab/MORF.git
Supplementary Material: zip
Assigned Action Editor: ~David_Fouhey2
Submission Number: 1556