MobileAfford: Mobile Robotic Manipulation through Differentiable Affordance Learning

Published: 16 Apr 2024, Last Modified: 02 May 2024, MoMa WS 2024 Oral, CC BY 4.0
Keywords: Mobile manipulation, Affordance learning
TL;DR: A novel mobile manipulation strategy through differentiable affordance learning
Abstract: Mobile manipulation in diverse environments is essential yet challenging for robotic home assistants and flexible production. Point-level affordance, which predicts a per-point actionable score and thus proposes the best point to interact with, has demonstrated excellent performance and generalization in static manipulation. However, whether such actionable priors can be directly applied to mobile manipulation remains untested. In this paper, we present a comprehensive differentiable-affordance-based learning framework, *MobileAfford*, which uses only visual input to guide the entire motion and manipulation process. We unify motion and manipulation for known and unknown objects in arbitrary environments as trajectory and target affordance optimization. We demonstrate the applicability of the framework in a variety of experiments, including pushing and pulling known and unknown articulated objects with mobile robot platforms. Experimental results confirm the state-of-the-art effectiveness of our approach.
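The abstract's core primitive, point-level affordance, can be sketched as follows. This is an illustrative example only: the paper's actual network and optimization are not described on this page, so a toy height-based heuristic stands in for the learned per-point scoring model, and all function names here are hypothetical.

```python
import numpy as np


def predict_affordance(points: np.ndarray) -> np.ndarray:
    """Stand-in for a learned per-point affordance network (hypothetical).

    points: (N, 3) point cloud. Returns (N,) actionable scores in [0, 1].
    A toy heuristic (favoring higher points) replaces the learned model.
    """
    z = points[:, 2]
    return (z - z.min()) / (z.max() - z.min() + 1e-8)


def best_contact_point(points: np.ndarray) -> np.ndarray:
    """Propose the point with the highest actionable score for interaction."""
    scores = predict_affordance(points)
    return points[np.argmax(scores)]


# Example: pick a contact point on a random synthetic point cloud.
cloud = np.random.default_rng(0).uniform(0.0, 1.0, size=(1024, 3))
contact = best_contact_point(cloud)
```

In the framework described by the abstract, such a scoring function would additionally be differentiable, so the proposed contact point and the robot's trajectory can be refined jointly by gradient-based optimization rather than selected by a hard argmax alone.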
Submission Number: 8