Action Poisoning Attacks on Linear Contextual Bandits

Published: 06 Mar 2023, Last Modified: 06 Mar 2023. Accepted by TMLR.
Abstract: Contextual bandit algorithms have many applications across a variety of scenarios. To develop trustworthy contextual bandit systems, it is essential to understand the impact of various adversarial attacks on contextual bandit algorithms. In this paper, we propose a new class of attacks: action poisoning attacks, in which an adversary can change the action signal selected by the agent. We design action poisoning attack schemes against disjoint linear contextual bandit algorithms in both white-box and black-box settings. We further analyze the cost of the proposed attack strategies for a very popular and widely used bandit algorithm: LinUCB. We show that, in both white-box and black-box settings, the proposed attack schemes can force the LinUCB agent to pull a target arm very frequently while incurring only logarithmic cost. We also extend the proposed attack strategies to generalized linear models and show their effectiveness.
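As a hypothetical illustration of the threat model described in the abstract (a simplified sketch, not the paper's exact attack scheme), the code below pairs a disjoint LinUCB agent with an action poisoning adversary: whenever the agent selects a non-target arm, a white-box attacker silently executes the context-wise worst arm instead, so the agent's reward estimates for non-target arms drift downward and the target arm ends up chosen most of the time. All names and parameters (`d`, `K`, `alpha`, the noise level, the swap-to-worst-arm rule) are illustrative assumptions.

```python
# Hypothetical sketch of an action poisoning attack on disjoint LinUCB.
# The agent picks an arm; the attacker may swap the action signal before the
# environment executes it; the agent attributes the observed reward to the
# arm it originally chose. Attack cost = number of rounds with a swap.
import numpy as np

rng = np.random.default_rng(0)
d, K, T, target = 4, 5, 5000, 2
theta = rng.normal(size=(K, d))              # true per-arm parameters (disjoint model)

# Disjoint LinUCB state: one ridge-regression estimate per arm.
A = np.stack([np.eye(d) for _ in range(K)])  # per-arm Gram matrices
b = np.zeros((K, d))
alpha = 1.0                                  # exploration width (illustrative)

attack_cost = 0
target_pulls = 0
for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)

    # Agent: compute a UCB score for each arm and pick the maximizer.
    ucb = np.empty(K)
    for a in range(K):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        ucb[a] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)
    chosen = int(np.argmax(ucb))

    # White-box attacker (assumed to know theta): if a non-target arm is
    # chosen, execute the worst arm for this context instead, so non-target
    # estimates deteriorate and the target arm looks best over time.
    executed = chosen
    if chosen != target:
        executed = int(np.argmin(theta @ x))
        if executed != chosen:
            attack_cost += 1

    reward = theta[executed] @ x + 0.1 * rng.normal()

    # Agent updates the arm it *believes* it pulled.
    A[chosen] += np.outer(x, x)
    b[chosen] += reward * x
    target_pulls += (chosen == target)

print(f"target pulled {target_pulls}/{T} times, attack cost {attack_cost}")
```

Under this simplified swap-to-worst rule, the attack cost stops growing once the agent converges to the target arm, which is consistent with the logarithmic-cost behavior the abstract claims for the paper's schemes.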
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Chicheng_Zhang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 616