T-TD3: A Reinforcement Learning Framework for Stable Grasping of Deformable Objects Using Tactile Prior

Published: 01 Jan 2025 · Last Modified: 11 Apr 2025 · IEEE Transactions on Automation Science and Engineering, 2025 · License: CC BY-SA 4.0
Abstract: Human tactile perception enables rapid assessment of deformable objects and the application of appropriate force to prevent slip or excessive deformation. This task, however, remains challenging for robots. To address it, we propose the T-TD3 algorithm, which uses a multi-scale fusion neural network (MSF-Net) to fuse multi-scale features, including tactile prior information obtained through preprocessing. Our approach decomposes the robot task of grasping deformable objects into three subtasks: slip detection, stable-grasp evaluation, and minimum-grasping-force tracking. We develop a simulation environment, CR5GraspStable-Env, using PyBullet and TACTO for network training. Our method achieves a success rate of 94.81% in grasping deformable objects in real-world experiments, demonstrating excellent sim-to-real transfer. Moreover, the proposed approach can potentially be extended to other stable grasping tasks that rely on tactile perception.

Note to Practitioners—Traditional strategies for stable grasping typically apply a large grasping force to prevent slip. For deformable objects, however, excessive force may cause excessive deformation and damage. While humans can flexibly control grasping force through tactile perception, doing so remains a significant challenge for robots. To overcome this challenge, we propose a novel method in which the robot learns an improved stable grasping strategy by incorporating tactile priors. Our method establishes a unified tactile prior representation for the vision-based tactile sensors mounted on the robot grippers, enabling them to sense the contact state. It also applies sensor distortion correction based on spatial symmetry, so that the tactile prior representation is applicable to any kind of vision-based tactile sensor.
Furthermore, the method integrates multi-scale tactile priors with robot states and uses reinforcement learning to make real-time decisions autonomously, aiming to minimize the grasping force while maintaining a stable grasp on deformable objects. The primary objective of this article is to achieve stable grasps on deformable objects, although the method is also applicable to grasping rigid objects. We validated its practicality by achieving a 94.81% success rate in stably grasping deformable objects with the CR5 robotic arm in both simulated and real-world environments. The method can also be readily applied to other robot systems. In the future, we plan to extend it to more dexterous manipulators and more complex manipulation tasks, and to introduce new fusion algorithms and decision-making strategies to further broaden its applicability.
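To make the "minimize grasping force while maintaining a stable grasp" objective concrete, the sketch below shows one plausible reward shaping for such a reinforcement learning agent. The function names, weights, and the scalar slip/force signals are illustrative assumptions for exposition only, not the paper's actual T-TD3 formulation.

```python
# Hypothetical reward shaping for a stable-grasp RL task (illustrative only;
# the actual T-TD3 reward is defined in the paper, not reproduced here).

def grasp_reward(slip_detected: bool, grasp_force: float,
                 max_force: float = 10.0,
                 slip_penalty: float = 1.0,
                 force_weight: float = 0.1) -> float:
    """Reward stable grasps that use as little force as possible.

    A fixed penalty is returned when slip is detected; otherwise the
    reward decreases linearly with the normalized grasping force, so
    the policy is pushed toward the minimum force that still prevents
    slip -- mirroring the objective described in the abstract.
    """
    if slip_detected:
        return -slip_penalty
    return 1.0 - force_weight * (grasp_force / max_force)
```

Under this shaping, a slip yields a reward of -1.0, a slip-free grasp at zero force yields 1.0, and a slip-free grasp at maximum force yields 0.9, so the agent is driven toward the lightest stable grip.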