Keywords: Reinforcement Learning, Robustness, Adversarial Attack
TL;DR: We introduce a novel attack method that manipulates a reinforcement learning agent's behavior using imitation learning, together with the first defense strategy against such attacks, grounded in our theoretical analysis.
Abstract: This study investigates behavior-targeted attacks on reinforcement learning and their countermeasures. Behavior-targeted attacks aim to steer the victim's behavior toward an adversary-specified objective through adversarial interventions in state observations. Existing behavior-targeted attacks have notable limitations, such as requiring white-box access to the victim's policy. To address this, we propose a novel attack method using imitation learning from adversarial demonstrations, which works under limited access to the victim's policy and is environment-agnostic. In addition, our theoretical analysis proves that the policy's sensitivity to state changes affects defense performance, particularly in the early stages of the trajectory. Based on this insight, we propose time-discounted regularization, which enhances robustness against attacks while maintaining task performance. To the best of our knowledge, this is the first defense strategy specifically designed for behavior-targeted attacks.
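To make the defense idea concrete, here is a minimal sketch of what time-discounted regularization could look like, assuming a PyTorch policy with discrete actions. Every name, the KL-based sensitivity measure, and the random-noise perturbation are illustrative assumptions, not the paper's actual implementation; the paper's analysis motivates only the general shape, i.e., penalizing sensitivity to state changes with a weight that decays over the trajectory so early timesteps are smoothed most.

```python
import torch

def time_discounted_smoothness_loss(policy, states, gamma=0.99, eps=0.05):
    """Hypothetical regularizer: penalize the policy's sensitivity to
    small state perturbations, weighted by gamma**t so that early
    timesteps (which the analysis identifies as most influential for
    defense performance) are regularized more strongly.

    policy : module mapping states of shape (T, state_dim) to action
             logits of shape (T, n_actions)  -- assumed interface
    states : tensor of shape (T, state_dim), one trajectory
    """
    T = states.shape[0]
    # Random noise as a cheap stand-in for an adversarial perturbation.
    perturbed = states + eps * torch.randn_like(states)
    log_p = torch.log_softmax(policy(states), dim=-1)
    log_q = torch.log_softmax(policy(perturbed), dim=-1)
    # Per-timestep sensitivity: KL(clean action dist || perturbed dist).
    kl = (log_p.exp() * (log_p - log_q)).sum(dim=-1)          # shape (T,)
    # Time-discounted weights: gamma**0, gamma**1, ..., gamma**(T-1).
    weights = gamma ** torch.arange(T, dtype=states.dtype)
    return (weights * kl).mean()
```

In such a setup, this term would be added to the ordinary task objective, e.g. `loss = task_loss + lam * time_discounted_smoothness_loss(policy, states)`, so the discount trades off robustness in early states against unconstrained task performance later in the trajectory.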
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17406