Keywords: Deformable, Non-rigid, Manipulation, Relative Placement
TL;DR: We extend the "relative placement" formulation beyond rigid-body manipulation, enabling generalization to novel object instances, unseen scene configurations, and multimodal placements for highly deformable tasks.
Abstract: The task of "relative placement" is to predict the placement of one object in relation to another, e.g., placing a mug on a mug rack. Recent methods for relative placement have made tremendous progress towards data-efficient learning for robot manipulation; using explicit object-centric geometric reasoning, these approaches enable generalization to unseen task variations from a small number of demonstrations. State-of-the-art works in this area, however, have yet to represent deformable transformations, despite the ubiquity of non-rigid bodies in real-world settings. As a first step towards bridging this gap, we propose "cross-displacement," an extension of the principles of relative placement to geometric relationships between deformable objects, and present a novel vision-based method that learns cross-displacement for a non-rigid task through dense diffusion. We demonstrate our method's ability to generalize to unseen object instances, out-of-distribution scene configurations, and multimodal goals on a highly deformable cloth-hanging task that is beyond the scope of prior work.
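For concreteness, one way to formalize the distinction (a minimal sketch in our own notation, not drawn from the paper; the symbols P_A, P_B, f are illustrative): rigid relative placement predicts a single transform, while cross-displacement predicts a dense, per-point displacement field conditioned on both objects.

Rigid relative placement: given point clouds $P_A$ (action object) and $P_B$ (anchor), predict one transform
$$T^* \in SE(3) \quad \text{such that} \quad T^* P_A \ \text{satisfies the desired relation to } P_B.$$

Cross-displacement: predict a displacement for each point, conditioned on both clouds (hence "cross"),
$$d_i = f(p_i, P_A, P_B), \qquad p_i' = p_i + d_i \quad \text{for each } p_i \in P_A,$$
so the goal configuration $\{p_i'\}$ need not be a rigid motion of $P_A$. The rigid case is recovered as the special case
$$d_i = (R - I)\,p_i + t \quad \text{for a single fixed } (R, t) \in SE(3).$$

Because each $d_i$ can vary independently, the field can express non-rigid goal configurations (e.g., a cloth draped over a hanger), and modeling the field generatively, as with the dense diffusion described in the abstract, allows it to capture multimodal placements.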
Supplementary Material: zip
Spotlight Video: mp4
Website: https://sites.google.com/view/tax3d-corl-2024
Publication Agreement: pdf
Student Paper: yes
Submission Number: 180