Reinforcement Learning with Demonstrations from Mismatched Task under Sparse Reward

16 Jun 2022, 10:45 (modified: 13 Nov 2022, 10:19) · CoRL 2022 Poster · Readers: Everyone
Student First Author: yes
Keywords: Sparse Reward Reinforcement Learning, Learn from Demonstration, Task Mismatch
TL;DR: We propose the CRSfD method to aid online reinforcement learning with demonstrations from a mismatched task in sparse reward environments.
Abstract: Reinforcement learning often suffers from the sparse reward issue in real-world robotics problems. Learning from demonstration (LfD) is an effective way to mitigate this problem, leveraging collected expert data to aid online learning. Prior works often assume that the learning agent and the expert aim to accomplish the same task, which requires collecting new data for every new task. In this paper, we consider the case where the target task is mismatched from, but similar to, that of the expert. Such a setting is challenging, and we find that existing LfD methods can encounter a phenomenon called reward signal backward propagation blockage, in which the agent cannot be effectively guided by demonstrations from a mismatched task. We propose conservative reward shaping from demonstration (CRSfD), which shapes the sparse rewards using an estimated expert value function. To accelerate the learning process, CRSfD guides the agent to explore conservatively around the demonstrations. Experimental results on robot manipulation tasks show that our approach outperforms baseline LfD methods when transferring demonstrations collected in a single task to other different but similar tasks.
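The abstract describes shaping sparse rewards with a value function estimated from expert demonstrations. As a rough, hypothetical sketch of that general idea (potential-based reward shaping driven by a demonstration-estimated value function), one might write something like the following; the names `shape_reward` and `value_fn` and the discount value are illustrative assumptions, not the authors' CRSfD implementation:

```python
import numpy as np


def shape_reward(reward, value_fn, state, next_state, done, gamma=0.99):
    """Potential-based reward shaping with an estimated expert value function.

    Adds gamma * V(s') - V(s) to the sparse environment reward, giving the
    agent a dense guidance signal while preserving the optimal policy.
    `value_fn` stands in for a value function fitted to expert demonstrations.
    """
    potential_next = 0.0 if done else value_fn(next_state)
    return reward + gamma * potential_next - value_fn(state)


if __name__ == "__main__":
    # Placeholder "expert" value: higher value closer to the origin.
    value_fn = lambda s: -np.linalg.norm(s)
    s, s_next = np.array([1.0, 0.5]), np.array([0.8, 0.4])
    print(shape_reward(0.0, value_fn, s, s_next, done=False))
```

This only illustrates the shaping term; the conservative exploration around demonstrations described in the abstract is not captured here.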
Supplementary Material: zip