Kernel dynamic policy programming: Applicable reinforcement learning to robot systems with high dimensional states

09 Jun 2022 · OpenReview Archive Direct Upload
Abstract: We propose a new value function approach for model-free reinforcement learning in Markov decision processes with high dimensional states that addresses the issues of brittleness and intractable computational complexity, thereby rendering value-function-based reinforcement learning algorithms applicable to high dimensional systems. Our new algorithm, Kernel Dynamic Policy Programming (KDPP), smoothly updates the value function in accordance with the Kullback–Leibler divergence between the current and updated policies. Stabilizing the learning in this manner enables the application of the kernel trick to value function approximation, which greatly reduces the computational requirements for learning in high dimensional state spaces. The performance of KDPP against other kernel-trick-based value function approaches is first investigated in a simulated multi-DOF manipulator reaching task, where only KDPP efficiently learned a viable policy. As an application to a real-world high dimensional robot system, KDPP successfully learned to unscrew a bottle cap with a Pneumatic Artificial Muscle (PAM) driven robotic hand with tactile sensors, a system with a high dimensional state space, while given limited samples and ordinary computing resources.
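
For readers who want a concrete picture of the mechanism the abstract describes, the following toy Python/NumPy sketch combines a smoothed, Dynamic-Policy-Programming-style value update with a Gaussian RBF kernel expansion. It is a minimal illustration under stated assumptions, not the paper's implementation: the KernelPreference class, the eta, bandwidth, and reg parameters, and the kernel-ridge-regression fitting step are all hypothetical choices, and the Boltzmann-weighted backup follows the standard DPP action-preference recursion rather than KDPP's exact formulation.

import numpy as np

# Hypothetical illustration of a DPP-style smoothed value update combined with
# the kernel trick, in the spirit of KDPP. All names and constants are
# assumptions for a toy discrete-action problem, not the paper's implementation.

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between row-wise state sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

class KernelPreference:
    """Action preferences Psi(s, a), one kernel expansion per discrete action."""

    def __init__(self, centers, n_actions, bandwidth=1.0, reg=1e-3):
        self.centers = centers                  # kernel centers (subset of visited states)
        self.n_actions = n_actions
        self.bandwidth = bandwidth
        self.reg = reg
        self.weights = np.zeros((len(centers), n_actions))

    def psi(self, states):
        """Evaluate Psi(s, a) for all actions at the given states."""
        K = rbf_kernel(states, self.centers, self.bandwidth)
        return K @ self.weights                 # shape: (n_states, n_actions)

    def softmax_backup(self, psi_values, eta=1.0):
        """Boltzmann-weighted average of Psi over actions (the DPP-style soft operator)."""
        p = np.exp(eta * (psi_values - psi_values.max(axis=1, keepdims=True)))
        p /= p.sum(axis=1, keepdims=True)
        return (p * psi_values).sum(axis=1)

    def fit_update(self, states, actions, rewards, next_states, gamma=0.99, eta=1.0):
        """One smoothed update: regress DPP-style targets
        Psi(s,a) + r + gamma * M_eta Psi(s') - M_eta Psi(s) via kernel ridge regression."""
        psi_s = self.psi(states)
        psi_next = self.psi(next_states)
        m_s = self.softmax_backup(psi_s, eta)
        m_next = self.softmax_backup(psi_next, eta)
        targets = psi_s[np.arange(len(actions)), actions] + rewards + gamma * m_next - m_s

        # Kernel ridge regression of the targets, one weight vector per action.
        K = rbf_kernel(self.centers, self.centers, self.bandwidth)
        Kx = rbf_kernel(states, self.centers, self.bandwidth)
        for a in range(self.n_actions):
            mask = actions == a
            if not mask.any():
                continue
            A = Kx[mask].T @ Kx[mask] + self.reg * K
            b = Kx[mask].T @ targets[mask]
            self.weights[:, a] = np.linalg.solve(A, b)

    def policy(self, states, eta=1.0):
        """Boltzmann policy over the current action preferences."""
        psi_values = self.psi(states)
        p = np.exp(eta * (psi_values - psi_values.max(axis=1, keepdims=True)))
        return p / p.sum(axis=1, keepdims=True)

Representing the preferences as one kernel expansion per action keeps each update a linear solve in the number of kernel centers rather than in the raw state dimension, which is the computational benefit the abstract attributes to the kernel trick; the incremental, KL-regularized targets are what keep successive policies close and the learning stable.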