Robust Deep Reinforcement Learning with Adversarial Attacks

Published: 01 Jan 2018 · Last Modified: 27 Sep 2025 · AAMAS 2018 · CC BY-SA 4.0
Abstract: This paper proposes adversarial attacks for Reinforcement Learning (RL). These attacks are then leveraged during training to improve the robustness of RL within a robust control framework. We show that this adversarial training of DRL algorithms such as Deep Double Q-Learning and Deep Deterministic Policy Gradients leads to a significant increase in robustness to parameter variations on RL benchmarks such as the Mountain Car and Hopper environments. The full paper is available at https://arxiv.org/abs/1712.03632
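The abstract does not spell out implementation details, but the core idea of training on adversarially perturbed observations can be illustrated with a minimal FGSM-style sketch in PyTorch. The function name, the `q_net` interface, and `epsilon` are assumptions for illustration, not taken from the paper:

```python
import torch

def fgsm_perturb_state(q_net, state, action, epsilon=0.01):
    """Illustrative gradient-based attack on an observation.

    Perturbs `state` so that the agent's chosen `action` looks worse
    under its own Q-network, yielding a worst-case observation.
    """
    # Clone so the original observation is untouched; track gradients.
    state = state.clone().detach().requires_grad_(True)
    # Q-value of the currently preferred action under the critic.
    q_value = q_net(state)[action]
    q_value.backward()  # gradient of Q(s, a) w.r.t. the state
    # Step against the gradient sign so the chosen action's value drops.
    adv_state = state - epsilon * state.grad.sign()
    return adv_state.detach()
```

During training, the agent would then be updated on `adv_state` rather than the clean `state`, which is the adversarial-training idea the abstract describes.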