Dynamics Model Based Adversarial Training For Competitive Reinforcement Learning

Published: 31 Oct 2023, Last Modified: 25 Nov 2023, MASEC@NeurIPS'23 Poster
Keywords: Adversarial Training, Competitive Reinforcement Learning, Adversarial Robustness
TL;DR: We propose a dynamics model based adversarial training framework to train DRL agents robust against adversarial perturbations in two-agent games.
Abstract: Adversarial perturbations substantially degrade the performance of Deep Reinforcement Learning (DRL) agents, limiting the applicability of DRL in practice. Existing adversarial training for robustifying DRL uses the agent's information at the current step to minimize the loss upper bound introduced by adversarial input perturbations. However, it only works well for single-agent tasks. The heightened adversarial interaction in two-agent games introduces additional dynamics and makes existing methods less effective. Inspired by model-based RL, which builds a model of the environment's transition probability, we propose a dynamics model based adversarial training framework for modeling multi-step state transitions. Our dynamics model transitively predicts future states, providing more precise back-propagated future information during adversarial perturbation generation and hence substantially improving the agent's empirical robustness under different attacks. Our experiments on four two-agent competitive MuJoCo games show that our method consistently outperforms state-of-the-art adversarial training techniques in terms of both empirical robustness and the normal functionality of DRL agents.
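
To make the core idea concrete, below is a minimal, hypothetical sketch of how a learned dynamics model could be rolled forward to back-propagate future information into an adversarial state perturbation. All names here (DynamicsModel, policy, value_fn, the PGD-style update, and the specific horizon and budget parameters) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch: generate an adversarial state perturbation by
# back-propagating through a learned dynamics model over multiple steps.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next state from the current state and action (assumed architecture)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def adversarial_perturbation(state, policy, dynamics, value_fn,
                             horizon=3, epsilon=0.05, steps=10, lr=0.01):
    """PGD-style perturbation that lowers the victim's predicted multi-step
    return by rolling the dynamics model forward (illustrative only)."""
    delta = torch.zeros_like(state, requires_grad=True)
    for _ in range(steps):
        s = state + delta
        total_value = 0.0
        # Roll out `horizon` steps through the learned dynamics model so the
        # gradient carries future information back to the perturbation.
        for _ in range(horizon):
            a = policy(s)            # differentiable (e.g., deterministic) policy output
            s = dynamics(s, a)       # predicted next state
            total_value = total_value + value_fn(s)
        total_value.sum().backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend the victim's predicted value
            delta.clamp_(-epsilon, epsilon)   # keep the perturbation bounded
            delta.grad.zero_()
    return (state + delta).detach()
```

In this sketch, the key difference from single-step adversarial training is that the objective sums the victim's predicted values over a multi-step rollout of the dynamics model, so the perturbation is shaped by future consequences rather than only the current step.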
Submission Number: 19