Improving reinforcement learning based moving object grasping with trajectory prediction

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · Intelligent Service Robotics, 2024 · CC BY-SA 4.0
Abstract: Most current grasping systems are designed to grasp static objects only; grasping dynamic objects has received far less attention in the literature. In traditional manipulation schemes, dynamic grasping requires either a highly precise dynamic model or sophisticated predefined grasping states and gestures, both of which are hard to obtain and tedious to design. In this paper, we develop a novel reinforcement learning (RL)-based dynamic grasping framework with a trajectory prediction module to address these issues. In particular, we divide dynamic grasping into two parts: RL-based grasping-strategy learning and trajectory prediction. In simulation, an RL agent is trained to grasp a static object. When this well-trained agent is transferred to the real world, its observation is augmented with the prediction from an LSTM-based trajectory prediction module. We validated the proposed method on an experimental setup consisting of a Baxter manipulator with a two-finger gripper and an object placed on a moving car. We also evaluated how well RL performs with and without the introduced trajectory prediction. Experimental results demonstrate that our method can grasp objects moving along different trajectories at various speeds.
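The abstract describes augmenting the RL agent's observation with the output of an LSTM-based trajectory predictor at deployment time. A minimal sketch of that idea is below; the paper does not specify network sizes, window lengths, or observation layout, so the class name `TrajectoryPredictor`, the hidden size, the 10-step history window, and the 7-dimensional robot state are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Hypothetical LSTM mapping a window of past object positions
    to a predicted future position (all sizes are assumptions)."""
    def __init__(self, pos_dim=3, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(pos_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, pos_dim)

    def forward(self, past_positions):
        # past_positions: (batch, window, pos_dim)
        out, _ = self.lstm(past_positions)
        return self.head(out[:, -1])  # predicted next (x, y, z)

predictor = TrajectoryPredictor()
past = torch.randn(1, 10, 3)     # 10 tracked past positions (assumed window)
robot_state = torch.randn(1, 7)  # e.g. joint angles (assumed layout)

# Augmented observation fed to the pretrained static-grasping policy:
obs = torch.cat([robot_state, predictor(past)], dim=-1)
print(obs.shape)  # torch.Size([1, 10])
```

The design point is that the policy itself is unchanged: it was trained on static targets, and the predictor shifts the perceived target to where the moving object is expected to be.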