Improving Generalization in Visual Reinforcement Learning via Conflict-aware Gradient Agreement Augmentation
Abstract: Learning a policy that generalizes well to unseen
environments remains challenging yet critical in visual reinforcement learning. Although combining multiple data augmentations has proven successful for generalization in supervised learning,
naively applying it to visual RL algorithms can impair
training efficiency and cause severe performance
degradation. In this paper, we first conduct a qualitative
analysis and identify the main causes: (i) high-variance
gradient magnitudes and (ii) gradient conflicts among the
various augmentation methods. To alleviate these issues,
we propose a general policy gradient optimization framework, named Conflict-aware Gradient Agreement Augmentation (CG2A), which better integrates augmentation combination into visual RL algorithms to address the generalization
bias. In particular, CG2A develops a Gradient Agreement
Solver to adaptively balance the varying gradient magnitudes, and introduces a Soft Gradient Surgery strategy to alleviate gradient conflicts. Extensive experiments demonstrate that CG2A significantly improves the generalization
performance and sample efficiency of visual RL algorithms.
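The abstract does not give implementation details, but the gradient-conflict component can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch rendering of a soft gradient surgery step over per-augmentation policy gradients, assuming a PCGrad-style projection softened by a coefficient `alpha`; the function name, the softening scheme, and `alpha` are illustrative assumptions, not the paper's exact method.

```python
import torch

def soft_gradient_surgery(grads, alpha=0.5):
    """Softly resolve pairwise conflicts among per-augmentation gradients.

    grads: list of flattened gradient tensors, one per augmentation.
    alpha: hypothetical softening coefficient in [0, 1]; alpha=1 recovers
           a full PCGrad-style projection, alpha=0 leaves gradients as-is.
    """
    resolved = [g.clone() for g in grads]
    for i, g_i in enumerate(resolved):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # negative inner product marks a gradient conflict
                # remove only a fraction (alpha) of the conflicting
                # component of g_i along g_j, rather than all of it
                g_i -= alpha * (dot / g_j.norm().pow(2)) * g_j
    # average the conflict-reduced gradients into one update direction
    return torch.stack(resolved).mean(dim=0)
```

In use, each element of `grads` would be the flattened policy gradient computed under one augmentation; the returned tensor is the combined update. Only softening the projection, rather than removing the conflicting component entirely, is one plausible reading of "soft" in Soft Gradient Surgery.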