Environment Probing Interaction Policies

27 Sept 2018, 22:38 (modified: 10 Feb 2022, 11:41) · ICLR 2019 Conference Blind Submission
Keywords: Reinforcement Learning
Abstract: A key challenge in reinforcement learning (RL) is environment generalization: a policy trained to solve a task in one environment often fails to solve the same task in a slightly different test environment. A common approach to improve inter-environment transfer is to learn policies that are invariant to the distribution of testing environments. However, we argue that instead of being invariant, the policy should identify the specific nuances of an environment and exploit them to achieve better performance. In this work, we propose the “Environment-Probing” Interaction (EPI) policy, a policy that probes a new environment to extract an implicit understanding of that environment’s behavior. Once this environment-specific information is obtained, it is used as an additional input to a task-specific policy that can now perform environment-conditioned actions to solve a task. To learn these EPI-policies, we present a reward function based on transition predictability. Specifically, a higher reward is given if the trajectory generated by the EPI-policy can be used to better predict transitions. We experimentally show that EPI-conditioned task-specific policies significantly outperform commonly used policy generalization methods on novel testing environments.
Code: [![github](/images/github_icon.svg) Wenxuan-Zhou/EPI](https://github.com/Wenxuan-Zhou/EPI)
Data: [OpenAI Gym](https://paperswithcode.com/dataset/openai-gym)
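The transition-predictability reward described in the abstract can be illustrated with a toy sketch: an environment has a hidden dynamics parameter, the EPI policy's probing trajectory is reduced to an "embedding" (here, simply a least-squares estimate of that parameter), and the reward is the improvement in transition-prediction error when the predictor is conditioned on that embedding versus an environment-agnostic baseline. This is a hypothetical minimal stand-in, not the paper's actual implementation; the dynamics `s' = s + a * theta`, the `embed` function, and the `theta_prior` baseline are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, n=20):
    # Toy environment with hidden parameter theta (illustrative assumption):
    # deterministic dynamics s' = s + a * theta.
    s = rng.normal(size=n)
    a = rng.normal(size=n)
    s_next = s + a * theta
    return s, a, s_next

def embed(probe_s, probe_a, probe_s_next):
    # "Embedding" extracted from the probing trajectory: here just a
    # least-squares estimate of theta from the observed transitions.
    ds = probe_s_next - probe_s
    return np.dot(probe_a, ds) / np.dot(probe_a, probe_a)

def epi_reward(theta, theta_prior=1.0):
    # 1) The EPI policy probes the environment, yielding an embedding z.
    z = embed(*rollout(theta))
    # 2) Evaluate two transition predictors on held-out transitions:
    s, a, s_next = rollout(theta)
    err_baseline = np.mean((s + a * theta_prior - s_next) ** 2)  # env-agnostic
    err_embed = np.mean((s + a * z - s_next) ** 2)               # embedding-conditioned
    # 3) Reward is the prediction improvement: higher when the probing
    # trajectory made the environment's transitions more predictable.
    return err_baseline - err_embed
```

In this sketch, probing an environment whose true `theta` differs from the prior yields a positive reward, because the embedding-conditioned predictor recovers the hidden dynamics while the baseline cannot.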