Learning Any-View 6DoF Robotic Grasping in Cluttered Scenes via Neural Surface Rendering

Published: 01 Jul 2024, Last Modified: 08 Jul 2024 · GAS @ RSS 2024 · CC BY 4.0
Keywords: Neural Geometric Representations, Surface Rendering, Robotic Grasping
TL;DR: A geometric re-interpretation of robotic grasping as neural surface rendering for learning global and local representations that enable effective any-view grasping
Abstract: A significant challenge for real-world robotic manipulation is the effective 6DoF grasping of objects in cluttered scenes from any single viewpoint without needing additional scene exploration. This work re-interprets grasping as rendering and introduces NeuGraspNet, a novel method for 6DoF grasp detection that leverages advances in neural geometric representations and surface rendering. We encode the interaction between a robot's end-effector and an object's surface by jointly learning to render the local object surface and learning grasping functions in a shared feature space. Our approach uses global (scene-level) features for grasp generation and local (grasp-level) neural surface features for grasp evaluation. This enables effective, fully implicit 6DoF grasp quality prediction, even in partially observed scenes. NeuGraspNet operates on random viewpoints, common in mobile manipulation scenarios, and outperforms existing implicit and semi-implicit grasping methods. We demonstrate the real-world applicability of the method with a mobile manipulator robot, grasping in open cluttered spaces.
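To make the pipeline the abstract describes more concrete, below is a minimal, hypothetical PyTorch sketch of the two-branch design: a shared scene encoder yields global (scene-level) features, a generation head proposes grasps from those features, and local (grasp-level) features sampled around each candidate, standing in here for the neural surface rendering step, feed a grasp-quality evaluator. All module names, shapes, and the trilinear feature sampling used in place of full surface rendering are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumptions, not the authors' code) of the two-stage pipeline described
# in the abstract: global features drive grasp generation, local surface features
# drive grasp evaluation, and both branches share one scene feature volume.

import torch
import torch.nn as nn


class SceneEncoder(nn.Module):
    """Encodes a partial-view TSDF/occupancy grid into a global feature volume."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, feat_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (B, 1, D, D, D) -> shared features (B, C, D, D, D)
        return self.net(voxels)


class GraspGenerator(nn.Module):
    """Predicts per-voxel grasp orientation (quaternion) and gripper width."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.head = nn.Conv3d(feat_dim, 4 + 1, 1)  # quaternion + width

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)


class GraspEvaluator(nn.Module):
    """Scores a grasp from local surface features gathered around the gripper."""

    def __init__(self, feat_dim: int = 32, n_points: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * n_points, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, local_feats: torch.Tensor) -> torch.Tensor:
        # local_feats: (B, N, n_points, C) -> one quality logit per grasp (B, N)
        b, n = local_feats.shape[:2]
        return self.mlp(local_feats.reshape(b, n, -1)).squeeze(-1)


def sample_local_features(feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Trilinearly samples the shared feature volume at surface points near each
    grasp; a stand-in for querying rendered local object surfaces."""
    # points: (B, N, P, 3) in [-1, 1] grid coordinates
    b, n, p, _ = points.shape
    grid = points.reshape(b, n, p, 1, 3)                      # (B, N, P, 1, 3)
    sampled = torch.nn.functional.grid_sample(
        feats, grid, align_corners=True)                      # (B, C, N, P, 1)
    return sampled.squeeze(-1).permute(0, 2, 3, 1)            # (B, N, P, C)


if __name__ == "__main__":
    encoder, generator, evaluator = SceneEncoder(), GraspGenerator(), GraspEvaluator()
    voxels = torch.randn(1, 1, 40, 40, 40)        # single random-view scene grid
    feats = encoder(voxels)                        # shared global features
    grasp_params = generator(feats)                # dense grasp proposals
    points = torch.rand(1, 8, 64, 3) * 2 - 1       # candidate-local surface points
    quality = evaluator(sample_local_features(feats, points))
    print(grasp_params.shape, quality.shape)
```

In the actual method, the per-grasp points would come from rendering the local object surface implicitly in the shared feature space rather than from the random sampling used in this sketch.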
Submission Number: 3