A deep learning approach for seamless integration of cognitive skills for humanoid robots

2016 (modified: 04 Nov 2022) · ICDL-EPIROB 2016
Abstract: This study investigates the seamless integration of cognitive skills, such as visual recognition, attention switching, and action preparation and generation, for a humanoid robot. In our preliminary study [1], a deep dynamic neural network model was introduced to process spatio-temporal visuomotor patterns. In the current study, we extended the previous model to enhance its capability of handling sequential visuomotor information as well as forming visuomotor representations. We conducted synthetic robotic experiments in which a robot learned goal-directed reach-to-grasp actions under two experimental settings. In the first experiment, the reach-to-grasp task was performed under a parameterized visual occlusion condition in order to examine the memory capability of the model. In the second experiment, the reach-to-grasp action was combined with visual recognition of human gesture patterns using the working memory. The experimental results revealed that the proposed model was able to generalize its reaching and grasping skills to novel situations. Furthermore, analysis of neuron activations using a dimensionality reduction technique verified that the proposed model was capable of manipulating high-dimensional spatio-temporal visuomotor patterns by forming, via iterative learning, a dynamic link between those patterns and the actional intention developed in the higher level of the model.
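The abstract mentions applying a dimensionality reduction technique to neuron activations in order to inspect the model's internal dynamics. The snippet below is a minimal illustrative sketch of that kind of analysis, not the authors' actual procedure: it projects a synthetic (timesteps × units) activation matrix onto its top two principal components via SVD, the way one might visualize a network's activation trajectory. The function name `pca_project` and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def pca_project(activations, n_components=2):
    """Project a (timesteps x units) activation matrix onto its
    top principal components (PCA via SVD)."""
    centered = activations - activations.mean(axis=0)
    # Rows of vt are the principal directions of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Synthetic stand-in for recorded neuron activations:
# 100 timesteps, 50 hidden units
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 50))
traj = pca_project(acts)
print(traj.shape)  # (100, 2)
```

Plotting the resulting two-dimensional trajectory over time is a common way to reveal whether distinct actional intentions correspond to separated regions or branches in the reduced activation space.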