Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers

12 Jun 2020 (modified: 22 Oct 2023) · LifelongML@ICML2020
Student First Author: Yes
Keywords: reinforcement learning, transfer learning, domain adaptation
Abstract: We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building on a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the modified reward function penalizes the agent for visiting states and taking actions in the source domain that are not possible in the target domain. Our approach is applicable to domains with continuous states and actions and does not require learning a model of the dynamics.
TL;DR: Domain adaptation in RL that compensates for the difference in dynamics by modifying the reward function
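The classifier-based reward correction described in the abstract can be instantiated with two domain classifiers: one that sees full transitions (s, a, s') and one that sees only (s, a). The difference of their log-odds estimates the log-ratio of target to source transition probabilities, which is added to the source-domain reward. Below is a minimal PyTorch sketch of this construction; the names (`RewardCorrection`, `delta_r`, the network sizes) are illustrative assumptions, not taken from the paper's codebase.

```python
import torch
import torch.nn as nn


def mlp(in_dim, out_dim=2, hidden=64):
    # Small classifier head producing logits over {source=0, target=1}.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class RewardCorrection(nn.Module):
    """Estimates the reward modification Delta r(s, a, s') from two
    domain classifiers: one conditioned on (s, a, s'), one on (s, a).
    Each classifier is trained with cross-entropy on transitions labeled
    by the domain (source replay buffer vs. target replay buffer)."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.clf_sas = mlp(2 * state_dim + action_dim)  # sees (s, a, s')
        self.clf_sa = mlp(state_dim + action_dim)       # sees (s, a)

    def delta_r(self, s, a, s_next):
        sas_logits = self.clf_sas(torch.cat([s, a, s_next], dim=-1))
        sa_logits = self.clf_sa(torch.cat([s, a], dim=-1))
        log_p_sas = torch.log_softmax(sas_logits, dim=-1)
        log_p_sa = torch.log_softmax(sa_logits, dim=-1)
        # Delta r = log p(target | s, a, s') - log p(source | s, a, s')
        #         - log p(target | s, a)    + log p(source | s, a).
        # The (s, a) terms cancel the state-action marginal, leaving an
        # estimate of log p_target(s'|s,a) - log p_source(s'|s,a)
        # without ever fitting a dynamics model.
        return (log_p_sas[..., 1] - log_p_sas[..., 0]
                - log_p_sa[..., 1] + log_p_sa[..., 0])


if __name__ == "__main__":
    corr = RewardCorrection(state_dim=3, action_dim=1)
    s, a, s2 = torch.randn(5, 3), torch.randn(5, 1), torch.randn(5, 3)
    # Shape (5,); added to the source-domain reward when training the policy.
    print(corr.delta_r(s, a, s2))
```

The (s, a) classifier exists only to cancel the marginal term, which is why the correction can be computed from classifiers alone; this matches the abstract's claim that no dynamics model is required.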
Community Implementations: 1 code implementation ([CatalyzeX](https://www.catalyzex.com/paper/arxiv:2006.13916/code))