Generalization of Reinforcement Learning with Policy-Aware Adversarial Data Augmentation

28 May 2022 (modified: 05 May 2023) · DARL 2022 · Readers: Everyone
Keywords: Reinforcement Learning, Generalization, Policy-Aware Adversarial Data Augmentation
TL;DR: We propose a novel policy-aware adversarial data augmentation method with automatically generated trajectory data to increase the generalization ability of Reinforcement Learning agents.
Abstract: The generalization gap in reinforcement learning (RL) has been a significant obstacle that prevents RL agents from learning general skills and adapting to varying environments. Increasing the generalization capacity of RL systems can significantly improve their performance in real-world environments. In this work, we propose a novel policy-aware adversarial data augmentation method that augments standard policy learning with automatically generated trajectory data. Different from observation-transformation-based data augmentations, our method adversarially generates new trajectory data based on the policy gradient objective, aiming to increase the RL agent's generalization ability more effectively through policy-aware augmentation. Moreover, we deploy a mixup step to integrate the original and generated data, enhancing generalization capacity while mitigating over-deviation of the adversarial data. We conduct experiments on a number of RL tasks to investigate the generalization performance of the proposed method, comparing it with standard baselines and the state-of-the-art mixreg approach. The results show that our method generalizes well with limited training diversity and achieves state-of-the-art generalization test performance.
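
The page does not include an implementation, so the following is a minimal sketch of the two ideas described in the abstract: perturbing collected observations adversarially with respect to a policy-gradient loss, then mixing the original and adversarial data with a mixup coefficient. The PyTorch setup, network architecture, FGSM-style perturbation, and the `epsilon`/`alpha` hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): generate adversarial
# observations along the gradient of a policy-gradient loss, then mixup the
# original and adversarial observations before the policy update.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Small categorical policy; architecture is an illustrative assumption."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits


def pg_loss(policy, obs, actions, advantages) -> torch.Tensor:
    """Vanilla policy-gradient objective: -E[log pi(a|s) * A]."""
    log_probs = torch.log_softmax(policy(obs), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(chosen * advantages).mean()


def policy_aware_adversarial_mixup(policy, obs, actions, advantages,
                                   epsilon=0.05, alpha=0.2):
    """Perturb observations to increase the policy loss (FGSM-style step),
    then mix originals and adversarial samples to limit over-deviation."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    loss = pg_loss(policy, obs_adv, actions, advantages)
    grad, = torch.autograd.grad(loss, obs_adv)
    obs_adv = (obs + epsilon * grad.sign()).detach()  # ascend the policy loss

    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * obs + (1.0 - lam) * obs_adv          # mixup of the two views


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = PolicyNet(obs_dim=8, n_actions=4)
    optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

    # Dummy batch standing in for collected trajectory data.
    obs = torch.randn(32, 8)
    actions = torch.randint(0, 4, (32,))
    advantages = torch.randn(32)

    mixed_obs = policy_aware_adversarial_mixup(policy, obs, actions, advantages)
    loss = pg_loss(policy, mixed_obs, actions, advantages)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the adversarial direction is taken from the gradient of the same objective the agent optimizes, which is what makes the augmentation "policy-aware" rather than a generic observation transformation; the mixup coefficient keeps the augmented data close to the original trajectories.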