Neural Attention Field: Emerging Point Relevance in 3D Scenes for One-Shot Dexterous Grasping

Published: 05 Sept 2024, Last Modified: 22 Oct 2024, CoRL 2024, CC BY 4.0
Keywords: Dexterous Grasping, One-Shot Manipulation, Distilled Feature Field, Neural Implicit Field, Self-Supervised Learning
Abstract: One-shot transfer of dexterous grasps to novel scenes with object and context variations has been a challenging problem. While distilled feature fields from large vision models have enabled semantic correspondences across 3D scenes, their features are point-based and restricted to object surfaces, limiting their capability to model the complex semantic feature distributions of hand-object interactions. In this work, we propose the *neural attention field*, which represents a semantics-aware dense feature field in 3D space by modeling inter-point relevance instead of individual point features. At its core is a transformer decoder that computes cross-attention between any 3D query point and all scene points, producing the query point's feature through attention-based aggregation. We further propose a self-supervised framework for training the transformer decoder from only a few 3D point clouds, without hand demonstrations. After training, the attention field can be applied to novel scenes for semantics-aware dexterous grasping from a one-shot demonstration. Experiments show that our method provides better optimization landscapes by encouraging the end-effector to focus on task-relevant scene regions, yielding significant improvements in success rates on real robots compared with feature-field-based methods.
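The attention-based aggregation described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weight matrices here are random stand-ins for the learned projections of the transformer decoder, and a single attention head is assumed.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the scene points.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_feature(query_pos, scene_pos, scene_feat, W_q, W_k, W_v):
    """Aggregate a feature for one 3D query point from all scene points
    via scaled dot-product cross-attention (single head, illustrative)."""
    q = query_pos @ W_q                       # query from 3D position, (d,)
    k = scene_pos @ W_k                       # keys from scene positions, (N, d)
    v = scene_feat @ W_v                      # values from scene features, (N, d)
    scores = k @ q / np.sqrt(q.shape[-1])     # relevance of each scene point, (N,)
    attn = softmax(scores)                    # attention weights, sum to 1
    return attn @ v                           # attention-weighted aggregation, (d,)

# Toy scene: N points with 3D positions and per-point features.
rng = np.random.default_rng(0)
N, d_pos, d_feat, d = 128, 3, 16, 32
scene_pos = rng.normal(size=(N, d_pos))
scene_feat = rng.normal(size=(N, d_feat))
W_q = rng.normal(size=(d_pos, d))   # hypothetical projection weights
W_k = rng.normal(size=(d_pos, d))
W_v = rng.normal(size=(d_feat, d))

query_feature = cross_attention_feature(
    rng.normal(size=d_pos), scene_pos, scene_feat, W_q, W_k, W_v)
```

Because the feature of any query point is a relevance-weighted mixture over all scene points, the field is defined densely throughout 3D space rather than only on object surfaces.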
Supplementary Material: zip
Spotlight Video: mp4
Publication Agreement: pdf
Student Paper: yes
Submission Number: 42