Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction

Published: 01 Jan 2024 · Last Modified: 05 Mar 2025 · CVPR 2024 · CC BY-SA 4.0
Abstract: Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics. Recent neural implicit surface reconstruction methods have achieved high-quality results; however, editing and manipulating the 3D geometry of reconstructed scenes remains challenging due to the absence of naturally decomposed object entities and complex object/background compositions. In this paper, we present Total-Decom, a novel method for decomposed 3D reconstruction with minimal human interaction. Our approach seamlessly integrates the Segment Anything Model (SAM) with hybrid implicit-explicit neural surface representations and a mesh-based region-growing technique for accurate 3D object decomposition. Total-Decom requires minimal human annotations while providing users with real-time control over the granularity and quality of decomposition. We extensively evaluate our method on benchmark datasets and demonstrate its potential for downstream applications, such as animation and scene editing. The code is available at https://github.com/CVMI-Lab/Total-Decom.git.
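The abstract mentions a mesh-based region-growing technique as one ingredient of the decomposition. The sketch below is not the paper's implementation; it is a minimal, generic illustration of region growing over triangle-mesh faces, assuming a hypothetical `grow_region` helper that expands from user-seeded faces via breadth-first search with a normal-similarity criterion (the angle threshold stands in for the "granularity" control the paper describes).

```python
from collections import deque

import numpy as np


def grow_region(face_normals, face_adjacency, seed_faces, angle_thresh_deg=30.0):
    """Generic BFS region growing over mesh faces (illustrative, not Total-Decom's code).

    face_normals: (F, 3) array of unit face normals, one per triangle.
    face_adjacency: dict mapping a face index to its neighboring face indices.
    seed_faces: iterable of face indices to start from (e.g. faces selected
        by a user annotation back-projected onto the mesh).
    angle_thresh_deg: a neighbor joins the region if its normal is within
        this angle of the face that reached it; smaller values give finer,
        more conservative regions.
    """
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    region = set(seed_faces)
    queue = deque(seed_faces)
    while queue:
        f = queue.popleft()
        for nb in face_adjacency.get(f, []):
            if nb in region:
                continue
            # Accept the neighbor when its normal is similar enough to ours.
            if np.dot(face_normals[f], face_normals[nb]) >= cos_thresh:
                region.add(nb)
                queue.append(nb)
    return region
```

For example, on a mesh whose faces 0-2 share one normal and face 3 is perpendicular to them, seeding at face 0 grows the region across faces 0-2 and stops at the orientation discontinuity.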