Abstract: Learning approaches have a wide range of applications in robotic manipulation. However, traditional supervised or reinforcement learning methods tend to focus on learning under specific scenarios, overlooking the possibility of autonomous developmental learning driven by a changing environment. In this work, we propose a human-like Autonomous Developmental Evolutionary Learning (ADEL) framework that combines genotype (evolution strategies) and phenotype (reinforcement learning), enabling a robot to learn manipulation tasks from gradual environmental change over long time scales (evolution) and from intense interaction over short time scales (learning). We introduce a Q-network that interacts with the environment to learn manipulation policies and use an evolutionary algorithm to automatically optimize the network's hyperparameters. Moreover, we propose a composite, variable reward function representation, also optimized by evolution, to improve the performance of our algorithm across different scenarios. To demonstrate the performance of the proposed method, we construct a series of scenarios for robot grasping learning. Experimental results show that, with the proposed autonomous developmental evolutionary learning, robots learn grasping skills with a high success rate and a small average number of steps, suggesting that robots can learn manipulation skills autonomously, independently, and continuously from scratch to adapt to complex environments.
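To make the two-level structure concrete, below is a minimal sketch (not the paper's implementation) of an outer evolutionary loop over a genotype containing Q-learning hyperparameters and composite-reward weights, with the inner reinforcement-learning phase replaced by a toy surrogate fitness; all names and parameter ranges are illustrative assumptions.

```python
import random

# Illustrative genotype: Q-network hyperparameters plus weights of a
# composite reward (names and ranges are assumptions, not from the paper).
def random_genotype():
    return {
        "lr": 10 ** random.uniform(-4, -1),        # learning rate
        "gamma": random.uniform(0.90, 0.999),      # discount factor
        "eps_decay": random.uniform(0.95, 0.999),  # exploration decay
        "w_reach": random.uniform(0.0, 1.0),       # reward weight: reaching
        "w_grasp": random.uniform(0.0, 1.0),       # reward weight: grasp success
    }

def mutate(genotype, sigma=0.1):
    """Perturb one gene multiplicatively (a simple mutation operator)."""
    child = dict(genotype)
    key = random.choice(list(child))
    child[key] *= 1.0 + random.gauss(0.0, sigma)
    return child

def inner_rl_fitness(genotype):
    """Stand-in for the RL phase: in the real framework this would train a
    Q-network with the given hyperparameters and evolved composite reward,
    then return e.g. the grasp success rate. Here it is a toy surrogate."""
    return -(genotype["lr"] - 1e-3) ** 2 + genotype["gamma"] + 0.1 * genotype["w_grasp"]

def evolve(fitness, population_size=16, generations=50):
    """Outer evolutionary loop: truncation selection plus mutation."""
    population = [random_genotype() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: population_size // 4]      # keep the best quarter
        population = elite + [mutate(random.choice(elite))
                              for _ in range(population_size - len(elite))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve(inner_rl_fitness)
    print("best genotype:", best)
```

In the framework described above, the surrogate fitness would be replaced by a full Q-network training run whose performance (e.g. grasp success rate) drives selection, so that hyperparameters and reward weights co-adapt with the changing environment.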