COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

Anonymous

Sep 29, 2021 (edited Oct 06, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: certified robustness, poisoning attacks, reinforcement learning
  • Abstract: As reinforcement learning (RL) has achieved near human-level performance on a variety of tasks, its robustness has attracted great attention when it is applied to safety-critical domains such as autonomous driving. Recent studies have explored test-time attacks on RL and the corresponding defenses, while the robustness of RL against training-time attacks remains largely unexplored. In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated. We propose COPA, the first certification framework for certifying the number of poisoning trajectories that can be tolerated under different certification criteria. Given the complex structure of RL, we propose two certification criteria: per-state action stability and cumulative reward bound. To tighten the certification, we also propose different partition and aggregation protocols for training robust policies (see the illustrative sketch after this list). We further prove that some of the proposed certification methods are theoretically tight and that others are NP-complete problems. We conduct a thorough evaluation of COPA on different games trained with different offline RL algorithms and find that: (1) the proposed temporal aggregation in COPA significantly improves the certified robustness; (2) our certifications for both per-state action stability and cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and games, reflecting their intrinsic robustness properties.
  • One-sentence Summary: We propose the first framework for certifying the robustness of offline reinforcement learning against poisoning attacks.
  • Supplementary Material: zip
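
A minimal sketch of the partition-and-aggregation idea described in the abstract, assuming a DPA-style construction: the offline dataset is deterministically split into disjoint partitions, one subpolicy is trained per partition, and at each state the aggregated action is the majority vote of the subpolicies. Because a single poisoned trajectory falls into exactly one partition and thus corrupts at most one subpolicy (one vote), the vote margin yields a per-state certificate. This is not the authors' code: the learner `train_offline_rl`, the hash partitioning, and the vote-margin bound are assumptions for illustration, and the sketch assumes a discrete action space with at least two actions.

```python
from collections import Counter
import hashlib


def partition_index(trajectory_id: str, num_partitions: int) -> int:
    """Deterministic hash-based assignment: each trajectory lands in exactly
    one partition, independently of the rest of the dataset."""
    digest = hashlib.sha256(trajectory_id.encode()).hexdigest()
    return int(digest, 16) % num_partitions


def train_subpolicies(trajectories, num_partitions, train_offline_rl):
    """Train one subpolicy per disjoint partition.  `train_offline_rl` is an
    assumed callable standing in for any offline RL algorithm; it is not a
    real API of the paper."""
    partitions = [[] for _ in range(num_partitions)]
    for traj_id, traj in trajectories:
        partitions[partition_index(traj_id, num_partitions)].append(traj)
    return [train_offline_rl(p) for p in partitions]


def certified_action(state, subpolicies, actions):
    """Per-state aggregation with a poisoning-size certificate: returns the
    majority-vote action and the largest K such that ANY K poisoned
    trajectories (each flipping at most one vote) cannot change it."""
    votes = Counter({a: 0 for a in actions})
    for pi in subpolicies:
        votes[pi(state)] += 1
    # Deterministic ranking: higher count first, smaller action index on ties.
    (a_top, n_top), (a_run, n_run) = sorted(
        votes.items(), key=lambda kv: (-kv[1], kv[0]))[:2]
    # K poisons can lower the top count by K and raise the runner-up by K;
    # the vote is stable while n_top - K still beats n_run + K under the
    # tie-breaking rule (the runner-up wins ties iff its index is smaller).
    tie = 1 if a_run < a_top else 0
    return a_top, (n_top - n_run - tie) // 2
```

For instance, if 50 subpolicies vote 35/10/5 over three actions, the aggregated action is certifiably stable against any (35 - 10) // 2 = 12 poisoned trajectories, since each poisoned trajectory corrupts at most one partition and hence at most one vote. The abstract's cumulative reward bound criterion is harder to certify (some variants are proven NP-complete), so no sketch is attempted for it here.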