Cross-domain Adaptive Transfer Reinforcement Learning Based on State-Action Correspondence

Published: 20 May 2022, Last Modified: 05 May 2023. UAI 2022 Poster.
Keywords: Transfer Learning, Deep Reinforcement Learning, Cross-domain, Policy Transfer, Knowledge Transfer
TL;DR: This paper proposes a novel cross-domain adaptive transfer framework for deep reinforcement learning, which adaptively transfers knowledge from multiple cross-domain policies to accelerate the policy learning in the target domain.
Abstract: Despite its impressive success across various domains, deep reinforcement learning (DRL) still suffers from sample inefficiency. Transfer learning (TL), which leverages prior knowledge from different but related tasks to accelerate learning on a target task, has emerged as a promising direction for improving RL efficiency. The majority of prior work considers TL across tasks with the same state-action spaces, while transfer across domains with different state-action spaces remains relatively unexplored. Furthermore, existing cross-domain transfer approaches only enable transfer from a single source policy, leaving open the important question of how to best transfer from multiple source policies. This paper proposes a novel framework called Cross-domain Adaptive Transfer (CAT) to accelerate DRL. CAT learns the state-action correspondence from each source task to the target task and adaptively transfers knowledge from multiple source task policies to the target policy. CAT can be easily combined with existing DRL algorithms, and experimental results show that CAT significantly accelerates learning and outperforms other cross-domain transfer methods on multiple continuous action control tasks.
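To make the high-level idea concrete, the following is a minimal, hypothetical sketch of what transfer via state-action correspondence could look like: each source policy is wrapped with a learned state map (target state to source state) and action map (source action to target action), and suggestions from multiple sources are blended with adaptive weights. All class, function, and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
class SourcePolicyAdapter:
    """Wraps one source-domain policy with (hypothetical) learned
    state/action correspondence maps."""

    def __init__(self, source_policy, state_map, action_map):
        self.source_policy = source_policy  # pi_src: source state -> source action
        self.state_map = state_map          # phi: target state -> source state
        self.action_map = action_map        # psi: source action -> target action

    def suggest(self, target_state):
        # Translate the target state into the source domain, act there,
        # then translate the resulting action back to the target domain.
        return self.action_map(self.source_policy(self.state_map(target_state)))


def adaptive_transfer_action(adapters, weights, target_state):
    """Blend action suggestions from multiple source policies using
    adaptive transfer weights (here simply normalized to sum to 1)."""
    total = sum(weights)
    w = [x / total for x in weights]
    suggestions = [a.suggest(target_state) for a in adapters]
    dim = len(suggestions[0])
    return [sum(w[i] * suggestions[i][d] for i in range(len(adapters)))
            for d in range(dim)]


if __name__ == "__main__":
    # Toy example: two 1-D source policies mapped into a 2-D target action space.
    a1 = SourcePolicyAdapter(lambda s: [2.0 * s[0]],
                             lambda t: t[:1],
                             lambda a: [a[0], 0.0])
    a2 = SourcePolicyAdapter(lambda s: [-s[0]],
                             lambda t: t[:1],
                             lambda a: [0.0, a[0]])
    print(adaptive_transfer_action([a1, a2], [0.75, 0.25], [1.0, 5.0]))
    # -> [1.5, -0.25]
```

In the actual framework the correspondence maps and transfer weights would be learned, and the blended suggestion would guide (rather than replace) the target policy's own action selection; this sketch only shows the data flow.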