Abstract: Active 3D reconstruction has a wide range of applications, including augmented/virtual reality and various robotics tasks such as navigation, object recognition and manipulation, and scene perception. Most existing approaches rely on offline images or online streams captured by human-operated cameras, which requires significant manual effort and depends on the operator's expertise. Although some methods have explored using robots to assist data collection by planning Next-Best-Views and then moving the camera to cover them, they are often constrained by fixed camera perspectives and suboptimal robot control strategies. These limitations result in inaccurate camera positioning and inaccessible viewpoints, leading to ineffective data collection. In this paper, we explore the feasibility of using a vehicle-arm collaborative robot to tackle the challenges of active 3D reconstruction. Specifically, we decompose this task into two mutually iterative subtasks: an object-centric planning task, which focuses solely on the objects to generate candidate viewpoints, and an agent-centric interaction task, in which the robot moves the camera to cover these viewpoints. To achieve this, we propose a collaborative control framework that integrates the motions of the base and arm, enabling efficient recovery of the 3D shape of objects. Extensive comparative and ablation studies validate that our approach achieves finer surface reconstruction with reduced reconstruction time.
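As a rough illustration of the decomposition described in the abstract, the sketch below alternates an object-centric Next-Best-View selection with an agent-centric base/arm motion step. It is not the paper's implementation: the toy occupancy grid, the cone-based information gain, the base/arm pose split, and all function names are assumptions made purely to show the shape of the iterative loop.

```python
import numpy as np

# Toy occupancy grid: 0 = unknown, 1 = observed. A real system would use a
# volumetric map (e.g. a TSDF or octree); this stand-in is only illustrative.
GRID = np.zeros((20, 20, 20), dtype=np.uint8)
OBJECT_CENTER = np.array([10.0, 10.0, 5.0])


def sample_candidate_viewpoints(n=16, radius=8.0):
    """Object-centric step: candidate camera positions on a ring around the object."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return [OBJECT_CENTER + radius * np.array([np.cos(a), np.sin(a), 0.5])
            for a in angles]


def _visibility_mask(viewpoint):
    """Crude visibility model: voxels inside a narrow cone toward the object."""
    idx = np.indices(GRID.shape).reshape(3, -1).T            # all voxel indices
    to_voxel = idx - viewpoint
    to_center = OBJECT_CENTER - viewpoint
    cos_sim = (to_voxel @ to_center) / (
        np.linalg.norm(to_voxel, axis=1) * np.linalg.norm(to_center) + 1e-9)
    return cos_sim > 0.95


def information_gain(viewpoint):
    """Hypothetical gain: number of still-unknown voxels visible from the view."""
    return int(np.sum(GRID.reshape(-1)[_visibility_mask(viewpoint)] == 0))


def observe(viewpoint):
    """Stand-in for RGB-D capture and fusion: mark visible voxels as observed."""
    GRID.reshape(-1)[_visibility_mask(viewpoint)] = 1


def decompose_pose(viewpoint):
    """Agent-centric step: split a camera goal into a coarse base goal (x, y)
    and a fine arm goal, mirroring the base/arm collaboration."""
    base_goal = viewpoint[:2] - np.array([1.0, 0.0])   # park the base nearby
    arm_goal = viewpoint - np.append(base_goal, 0.0)   # arm covers the remainder
    return base_goal, arm_goal


# Mutually iterative loop: plan a view, move base and arm, fuse, repeat.
for step in range(10):
    candidates = sample_candidate_viewpoints()
    gains = [information_gain(v) for v in candidates]
    if max(gains) == 0:
        break                                           # little left to observe
    best = candidates[int(np.argmax(gains))]
    base_goal, arm_goal = decompose_pose(best)          # a real controller would
    observe(best)                                       # plan the motions here
    print(f"step {step}: base -> {base_goal.round(2)}, arm -> {arm_goal.round(2)}")
```

In this sketch the stopping criterion is simply that no candidate view can still reveal unknown voxels; the actual framework's planning and control criteria are described in the paper itself.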
External IDs: dblp:conf/ijcnn/LiuZCKYW25