Investigating Deep Reinforcement Learning For Grasping Objects With An Anthropomorphic Hand
Mayur Mudigonda, Pulkit Agrawal, Michael DeWeese, Jitendra Malik
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · Readers: everyone
Abstract: Grasping objects with high-dimensional controllers such as an anthropomorphic hand using reinforcement learning is a challenging problem. In this work we experiment with a 16-D simulated version of a prosthetic hand developed for the Southampton Hand Assessment Procedure (SHAP). We demonstrate that it is possible to learn successful grasp policies for an anthropomorphic hand from scratch using deep reinforcement learning. We find that our grasping model is robust to sensor noise, variations in object shape and position, and physical parameters such as object density. Under these variations, we also investigate the utility of touch sensing for grasping objects. We believe that our results and analysis provide useful insights and strong baselines for future research into the exciting direction of object manipulation with anthropomorphic hands using proprioceptive and other sensory feedback.
TL;DR: We show that a high-dimensional dexterous hand can be controlled to grasp objects using deep RL, and we present generalization experiments.
Keywords: grasping, dexterous manipulation, deep reinforcement learning, haptics
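The core idea in the abstract, learning a policy over many joint commands from reward alone, can be illustrated with a minimal policy-gradient sketch. Everything below is our own toy construction, not the authors' method or environment: we stand in for the 16-D hand with a synthetic reward that peaks at a hidden target joint posture, and train a linear-Gaussian policy with REINFORCE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a 16-D anthropomorphic hand (assumption, not the
# authors' simulator): reward is higher the closer the commanded joint
# angles are to a hidden "successful grasp" posture.
ACT_DIM = 16
target = rng.uniform(-1.0, 1.0, ACT_DIM)

def reward(action):
    return -np.sum((action - target) ** 2)

def train(iters=300, batch=32, std=0.3, lr=0.05):
    """REINFORCE on a linear-Gaussian policy with a fixed exploration std."""
    mean = np.zeros(ACT_DIM)
    for _ in range(iters):
        # Sample a batch of noisy actions from the current policy.
        actions = mean + std * rng.standard_normal((batch, ACT_DIM))
        rewards = np.array([reward(a) for a in actions])
        # Baseline-subtracted advantages reduce gradient variance.
        adv = rewards - rewards.mean()
        # Score-function gradient for a Gaussian policy mean:
        # grad = E[adv * (a - mean) / std^2]
        grad = (adv[:, None] * (actions - mean)).mean(axis=0) / std**2
        mean += lr * grad
    return mean

learned = train()
```

In the paper's actual setting the policy is a deep network over proprioceptive (and touch) observations and the reward comes from a physics simulator, but the gradient estimator above is the same basic mechanism that makes learning from scratch possible in high-dimensional action spaces.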