Abstract: Reconstructing the complete geometry of a 3D scene is essential for real-world applications such as service robotics and autonomous vehicles. A promising approach is to combine 3D reconstruction with semantic completion. However, existing datasets for evaluating such methods are limited to static environments, whereas real-world environments are dynamic and contain moving objects. Building a dataset of dynamic environments that reveals the impact of dynamic objects on 3D reconstruction is therefore an important step toward moving the field forward. To this end, we propose a method to synthesize dynamic 3D scenes with moving objects. The key challenge is compositing naturally moving objects into a 3D scene. We adopt humans as the moving objects and use a motion-generation method to produce natural human motion. The generated human motion is composited into a static 3D scene and rendered along a specified camera path. We use the data acquired from this process to evaluate 3D reconstruction with semantic completion. In addition, we analyze the relationship between the percentage of frames occupied by dynamic objects and reconstruction accuracy to reveal their impact. The code is available at https://github.com/zhouqinyuanrunner/Dyna3DBench.
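To make the analyzed quantity concrete, the sketch below shows one way to compute the percentage of frames occupied by dynamic objects and to group sequences by that ratio before comparing reconstruction accuracy. It is a minimal illustration assuming per-frame dynamic-object masks are available; the names `dynamic_frame_ratio` and `accuracy_by_dynamic_ratio` are hypothetical and not taken from the released code.

```python
import numpy as np

def dynamic_frame_ratio(dynamic_masks):
    """Fraction of frames in which at least one pixel belongs to a dynamic object.

    dynamic_masks: list of per-frame boolean arrays (H, W), True where a
    moving object (here, a synthesized human) is visible.
    """
    occupied = [bool(mask.any()) for mask in dynamic_masks]
    return float(np.mean(occupied)) if occupied else 0.0

def accuracy_by_dynamic_ratio(sequences, bucket_edges=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Group sequences by dynamic-frame ratio and average their accuracy.

    sequences: iterable of (dynamic_masks, accuracy) pairs, where accuracy is
    any scalar metric of the reconstruction (e.g., completion IoU).
    """
    buckets = {i: [] for i in range(len(bucket_edges) - 1)}
    for masks, accuracy in sequences:
        r = dynamic_frame_ratio(masks)
        for i in range(len(bucket_edges) - 1):
            if bucket_edges[i] <= r <= bucket_edges[i + 1]:
                buckets[i].append(accuracy)
                break
    # Mean accuracy per occupancy bucket; empty buckets are omitted.
    return {i: float(np.mean(v)) for i, v in buckets.items() if v}
```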