Generating Manipulation Sequences using Reinforcement Learning and Behavior Trees for Peg-In-Hole Task
Abstract: Reinforcement Learning (RL), a method of learning skills through trial and error, has been successfully applied in many robotics applications in recent years. We combine manipulation primitives (MPs), behavior trees (BTs), and reinforcement learning into an algorithm for peg-in-hole tasks that speeds up the convergence of the RL model and enhances adaptability to dynamic environments. Manipulation primitives serve as the actions of the RL agent, which narrows the gap between control instructions and robot actions and accelerates convergence of the RL model. Behavior trees govern the robot's behavior, allowing it to actively adapt to changes in the environment. In experiments, RL-BT, the combination of RL and BT, is designed for the peg-in-hole task in the Gazebo simulation environment, using a UR5 arm as the actuator. The experiments are conducted on a simple peg and a complex multi-hole peg from three aspects: convergence speed, adaptability to a dynamic environment, and algorithm robustness. The results show that RL-BT speeds up convergence and adapts to changes in the environment.
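The core idea of RL-BT can be illustrated with a minimal sketch: a tabular RL agent whose discrete actions are manipulation primitives, wrapped in a behavior-tree leaf node that delegates primitive selection to the learned policy. Everything here is an assumption for illustration, not the paper's implementation: the primitive names, the `QLearningAgent` and `RLActionNode` classes, and the toy transition model are all hypothetical.

```python
import random

random.seed(0)  # deterministic run for this illustrative sketch

# Hypothetical manipulation primitives (MPs) used as the RL agent's action set.
PRIMITIVES = ["approach", "spiral_search", "insert", "retract"]


class QLearningAgent:
    """Tabular Q-learning that selects among manipulation primitives."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.2):
        self.q = {}  # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy selection over the primitive set.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        values = [self.q.get((state, a), 0.0) for a in self.actions]
        return self.actions[values.index(max(values))]

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)


class RLActionNode:
    """Behavior-tree leaf: each tick delegates primitive choice to the policy."""

    def __init__(self, agent):
        self.agent = agent

    def tick(self, state):
        return self.agent.choose(state)


def step(state, action):
    """Toy peg-in-hole transitions: searching aligns the peg, inserting succeeds."""
    if state == "misaligned" and action == "spiral_search":
        return "aligned", 0.0, False
    if state == "aligned" and action == "insert":
        return "inserted", 1.0, True
    return state, -0.1, False  # any other primitive wastes a step


agent = QLearningAgent(PRIMITIVES)
node = RLActionNode(agent)
for _ in range(300):  # training episodes
    state = "misaligned"
    for _ in range(20):
        action = node.tick(state)
        nxt, reward, done = step(state, action)
        agent.update(state, action, reward, nxt)
        state = nxt
        if done:
            break

agent.epsilon = 0.0  # act greedily after training
```

After training, ticking the RL leaf greedily selects the search primitive when the peg is misaligned and the insert primitive once aligned, which mirrors how the paper uses MPs to shrink the gap between control instructions and robot actions; in the full system, the BT's condition nodes would additionally react to environment changes between ticks.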