Keywords: strategic classification, online optimization, learning halfspaces with noise
Abstract: In this paper, we study an online strategic classification problem in which a principal aims to learn an accurate binary linear classifier from sequentially arriving agents. In each round, the principal announces a classifier, and the arriving agent may strategically apply costly manipulations to their features in order to be classified into the favorable positive class. The principal does not know the true feature-label distribution; it observes all reported features but only the labels of positively classified agents. We assume that the true feature-label distribution is given by a halfspace model subject to arbitrary feature-dependent bounded noise (i.e., Massart noise). This problem faces the combined challenges of agents' strategic feature manipulations, partial label observations, and label noise. We tackle these challenges with a novel learning algorithm and show that it yields classifiers converging to the clairvoyant optimal one while attaining a regret of $O(\sqrt{T})$, up to poly-logarithmic and constant factors, over $T$ cycles.
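To make the interaction protocol in the abstract concrete, below is a minimal simulation sketch of one pass over $T$ agents: a true halfspace with (here, uniform) Massart-style label flips, agents who best-respond to the announced classifier under a hypothetical bounded-cost manipulation, and a principal who observes labels only for positively classified agents. The cost model, budget, noise rate, and the placeholder classifier update are illustrative assumptions; the paper's actual learning algorithm is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)      # true halfspace (assumed, for illustration)
eta = 0.2                             # assumed Massart noise bound (flip prob <= eta per point)
budget = 0.5                          # assumed bound on how far an agent can move its features

def agent_report(x, w):
    """Agent shifts toward the announced boundary if a bounded manipulation
    suffices to be classified positive (hypothetical quadratic-cost model)."""
    margin = w @ x
    if margin >= 0:
        return x                      # already classified positive: no manipulation needed
    shift = -margin / (np.linalg.norm(w) ** 2 + 1e-12)
    if shift * np.linalg.norm(w) <= budget:
        return x + shift * w          # cheapest move that just crosses the boundary
    return x                          # too costly: report features truthfully

observed = []
for t in range(T):
    w_t = rng.normal(size=d)          # placeholder announced classifier (not the paper's learner)
    x = rng.normal(size=d)
    y = np.sign(w_star @ x)
    if rng.random() < eta:            # Massart noise allows feature-dependent flip rates;
        y = -y                        # a uniform rate is used here for simplicity
    x_rep = agent_report(x, w_t)
    if w_t @ x_rep >= 0:              # label revealed only for positively classified agents
        observed.append((x_rep, y))

print(f"labels observed for {len(observed)} of {T} agents")
```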
Supplementary Material: zip
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 12733