Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP)

22 Nov 2021, 06:35 (modified: 07 Dec 2021, 20:42) · AAAI-22 AdvML Workshop Short Paper · Readers: Everyone
Keywords: Reinforcement Learning, Deep Reinforcement Learning, Model Extraction, Imitation Learning, Learning from Demonstration
TL;DR: We propose a novel mitigation technique based on constrained randomization of policy against adversarial stealing of DRL policies.
Abstract: Deep Reinforcement Learning (DRL) policies are vulnerable to unauthorized replication attacks, where an adversary exploits imitation learning to reproduce target policies from observed behavior. In this paper, we propose Constrained Randomization of Policy (CRoP) as a mitigation technique against such attacks. CRoP induces the execution of sub-optimal actions at random under performance loss constraints. We present a parametric analysis of CRoP, address the optimality of CRoP, and establish theoretical bounds on the adversarial budget and the expectation of loss. Furthermore, we report an experimental evaluation of CRoP in Atari environments under adversarial imitation, which demonstrates the efficacy and feasibility of our proposed method against policy replication attacks.
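The abstract describes CRoP only at a high level: actions are randomized, but only among sub-optimal choices that respect a performance-loss constraint. A minimal sketch of one plausible instantiation is given below, assuming a value-based agent with per-action Q-values; the randomization probability `epsilon` and the Q-value gap threshold `delta` are illustrative parameters not specified in the abstract, and the paper's actual constraint formulation may differ.

```python
import numpy as np

def crop_action(q_values, epsilon=0.1, delta=0.05, rng=None):
    """Illustrative CRoP-style action selection (assumed instantiation).

    With probability epsilon, play a random sub-optimal action whose
    Q-value is within delta of the best action's Q-value (the assumed
    performance-loss constraint); otherwise play greedily.
    """
    rng = rng or np.random.default_rng()
    q = np.asarray(q_values, dtype=float)
    best = int(np.argmax(q))
    if rng.random() >= epsilon:
        return best  # greedy action, no randomization this step
    # Candidate sub-optimal actions satisfying the loss constraint.
    near = [a for a in range(len(q)) if a != best and q[best] - q[a] <= delta]
    # Fall back to the greedy action if no candidate satisfies the constraint.
    return int(rng.choice(near)) if near else best
```

Under this sketch, an imitation-learning adversary observing state-action pairs sees a noisier mapping from states to actions, while the defender's expected per-step value loss is bounded by `epsilon * delta`.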