Keywords: multi-source transfer learning, world models, model-based reinforcement learning, sample efficiency, cross-domain transfer learning
TL;DR: Modular multi-source transfer learning techniques for model-based reinforcement learning that autonomously learn to extract information from a set of source tasks, regardless of differences between environments.
Abstract: A crucial challenge in reinforcement learning is reducing the number of environment interactions an agent needs to master a given task. Transfer learning addresses this issue by re-using knowledge from previously learned tasks. However, deciding which source task is best suited for knowledge extraction, and which algorithm components to transfer, are severe obstacles to its application in reinforcement learning. The goal of this paper is to alleviate these issues with modular multi-source transfer learning techniques. Our proposed methods automatically learn how to extract useful information from source tasks, regardless of differences in state-action space and reward function. We support our claims with extensive and challenging cross-domain experiments on visual control.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/multi-source-transfer-learning-for-deep-model/code)