DeformerNet based 3D Deformable Objects Shape Servo Control for Bimanual Robot Manipulation

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICIT 2024 · CC BY-SA 4.0
Abstract: As industrial automation advances, there is growing demand for robots that can perform manufacturing tasks autonomously. Aerospace manufacturing deals primarily with rigid components, but it also involves parts with deformable properties, such as cables and long beams. Enabling robots to manipulate 3D deformable objects would further raise the level of automation in this domain. We propose a framework for collaborative dual-robot manipulation of 3D deformable objects. Unlike previous methods that rely on manually designed shape features to represent deformable objects, we adopt the novel DeformerNet neural network architecture to overcome the limitations of such hand-crafted approaches. The framework records partial point clouds of the object and of the target shape using an external camera, and feeds this point cloud data into the network to learn a low-dimensional representation of the object's shape. From this representation, a visual servo controller generates Cartesian pose changes for the end effectors, ultimately achieving shape servoing of the object. Because assembled components in real aerospace manufacturing are typically large, a single robot may be unable to complete the assembly task alone; our framework therefore adopts a dual-robot collaborative control scheme to reflect this scenario. We validate the reliability of the framework in the Isaac Gym simulation environment.
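The sketch below illustrates, in PyTorch, the kind of pipeline the abstract describes: encode the current and goal partial point clouds into low-dimensional shape features, then map the pair of features to 6-DoF Cartesian pose changes for the two end effectors. The class names, layer sizes, and the simple PointNet-style max-pooling encoder are illustrative assumptions, not the authors' exact DeformerNet architecture.

```python
# Hypothetical, minimal sketch of a bimanual shape-servo step.
# Layer sizes and module names are assumptions, not the paper's exact network.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Maps a partial point cloud (N x 3) to a low-dimensional shape feature."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # Per-point features followed by max pooling -> permutation-invariant feature.
        return self.mlp(points).max(dim=-2).values


class BimanualShapeServoHead(nn.Module):
    """Predicts 6-DoF Cartesian pose changes for two end effectors."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 12),  # 6 DoF per arm (translation + rotation)
        )

    def forward(self, current_feat: torch.Tensor, goal_feat: torch.Tensor):
        delta = self.head(torch.cat([current_feat, goal_feat], dim=-1))
        return delta[..., :6], delta[..., 6:]  # (left arm, right arm)


if __name__ == "__main__":
    encoder = PointCloudEncoder()
    controller = BimanualShapeServoHead()

    current_pc = torch.rand(1024, 3)  # partial point cloud from the external camera
    goal_pc = torch.rand(1024, 3)     # target point cloud of the desired shape

    with torch.no_grad():
        dpose_left, dpose_right = controller(encoder(current_pc), encoder(goal_pc))
    print(dpose_left.shape, dpose_right.shape)  # torch.Size([6]) torch.Size([6])
```

In a closed-loop setting, such pose changes would be recomputed from fresh camera observations and applied until the observed point cloud matches the target shape.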