MFGS: Mask-free Gaussian Separation for 3D Object Reconstruction

Published: 01 Sept 2026 · Last Modified: 24 Mar 2026 · Pattern Recognition · CC BY 4.0
Abstract: Accurate 3D reconstruction from multi-view images is a fundamental problem in computer vision. A common acquisition strategy places an object on a rotating turntable while a camera captures it from various viewpoints. In such scenarios, the object moves relative to the background, so many existing reconstruction methods rely on object masks to separate the foreground from the background. The quality of these masks significantly affects the final reconstruction, yet obtaining high-quality, consistent masks is challenging and laborious, especially when controlled environments such as green screens are unavailable. To address this limitation, we introduce Mask-free Gaussian Separation (MFGS), a novel method that performs simultaneous object reconstruction and segmentation without requiring any input masks. Our approach builds on Gaussian Splatting and automatically disentangles the scene by extending each Gaussian primitive with a learnable parameter that represents its probability of belonging to the dynamic foreground object. This separation is optimized in a self-supervised manner, guided by object and camera transformation constraints. We evaluate MFGS on new synthetic and real-world datasets designed to reflect this challenging capture scenario. Experimental results demonstrate that our mask-free approach significantly outperforms existing methods. Notably, MFGS surpasses the state-of-the-art method (2DGS) that relies on high-quality segmentation masks, achieving a 27% improvement in novel view synthesis and a 7% improvement in geometry reconstruction.
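The core idea described in the abstract, attaching a learnable foreground-membership probability to each Gaussian so that foreground primitives follow the turntable motion while background primitives stay fixed, can be illustrated with a minimal sketch. This is an assumption-laden toy in NumPy, not the paper's implementation: the class name `GaussianSet`, the per-primitive logit `fg_logit`, and the soft blending of transformed and static positions are all hypothetical placeholders for how such a parameterization might look.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sketch (not the authors' code): each Gaussian carries a
# learnable logit; a sigmoid maps it to the probability of belonging
# to the dynamic foreground object.
class GaussianSet:
    def __init__(self, n, seed=0):
        rng = np.random.default_rng(seed)
        self.means = rng.normal(size=(n, 3))   # 3D Gaussian centers
        self.fg_logit = np.zeros(n)            # learnable; init -> prob 0.5

    def foreground_prob(self):
        # Soft foreground membership in [0, 1], optimized end to end.
        return sigmoid(self.fg_logit)

    def transformed_means(self, R, t):
        # Foreground Gaussians follow the object transform (R, t) as the
        # turntable rotates; background Gaussians remain static. The soft
        # membership probability blends the two hypotheses per primitive.
        p = self.foreground_prob()[:, None]
        moved = self.means @ R.T + t
        return p * moved + (1.0 - p) * self.means
```

In a full pipeline, the rendering loss on each training view would back-propagate through `p`, pushing each primitive toward whichever motion hypothesis (moving object vs. static background) better explains the observations, which is one plausible reading of the "object and camera transformation constraints" mentioned above.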