Continuously Discovering Novel Strategies via Reward-Switching Policy Optimization

12 Oct 2021 (modified: 05 May 2023) Deep RL Workshop NeurIPS 2021
Keywords: diverse behavior, deep reinforcement learning, multi-agent reinforcement learning
TL;DR: We propose a simple, generic, and effective iterative learning algorithm, Reward-Switching Policy Optimization (RSPO), for continuously discovering novel strategies.
Abstract: We present Reward-Switching Policy Optimization (RSPO), a paradigm to discover diverse strategies in complex RL environments by iteratively finding novel policies that are both locally optimal and sufficiently different from existing ones. To encourage the learning policy to consistently converge towards a previously undiscovered local optimum, RSPO switches between extrinsic and intrinsic rewards via a trajectory-based novelty measurement during the optimization process. When a sampled trajectory is sufficiently distinct, RSPO performs standard policy optimization with extrinsic rewards. For trajectories with high likelihood under existing policies, RSPO utilizes an intrinsic diversity reward to promote exploration. Experiments show that RSPO is able to discover a wide spectrum of strategies in a variety of domains, ranging from single-agent particle-world tasks and MuJoCo continuous control to multi-agent stag-hunt games and StarCraft II challenges.
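The reward-switching rule described in the abstract can be illustrated with a minimal sketch. This is not the authors' reference implementation: the policy interface (`policy.log_prob(state, action)`), the `novelty_threshold` cutoff, and the `bonus_scale` factor are all illustrative assumptions used only to show the switching logic between extrinsic and intrinsic rewards.

```python
# Minimal sketch of trajectory-based reward switching (assumed interface, not
# the authors' code). A policy is assumed to expose log_prob(state, action).
from typing import Any, List, Tuple


def switched_rewards(
    trajectory: List[Tuple[Any, Any]],   # sampled (state, action) pairs
    extrinsic_rewards: List[float],      # per-step task rewards
    existing_policies: List[Any],        # previously discovered policies
    novelty_threshold: float = -50.0,    # assumed log-likelihood cutoff
    bonus_scale: float = 1.0,            # assumed intrinsic reward scale
) -> List[float]:
    """Choose the reward signal used to optimize one sampled trajectory.

    If the trajectory is sufficiently unlikely under every existing policy,
    keep the extrinsic rewards; otherwise switch to an intrinsic diversity
    reward that penalizes likelihood under the closest existing policy.
    """
    if not existing_policies:
        return extrinsic_rewards

    # Trajectory log-likelihood under each previously discovered policy.
    log_liks = [
        sum(pi.log_prob(s, a) for s, a in trajectory)
        for pi in existing_policies
    ]

    if max(log_liks) < novelty_threshold:
        # Sufficiently distinct: standard policy optimization on task rewards.
        return extrinsic_rewards

    # Too similar to a prior policy: per-step intrinsic diversity reward,
    # here the negative step log-likelihood under the closest prior policy.
    closest = existing_policies[max(range(len(log_liks)),
                                    key=log_liks.__getitem__)]
    return [-bonus_scale * closest.log_prob(s, a) for s, a in trajectory]
```

In this sketch the whole trajectory is scored at once and the per-step rewards are replaced wholesale; the paper's actual novelty measurement and diversity reward may differ in form and granularity.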
Supplementary Material: zip
