Keywords: Robust Optimization, Decision-Dependent Uncertainty, Strategic Classification, Strategic Machine Learning, Game Theory.
TL;DR: We study the classification of rational agents who strategically modify their inputs in response to algorithmic systems, incurring a cost that depends on the classifier's decisions.
Abstract: Humans have been found to manipulate their inputs to algorithmic decision systems to receive favorable outcomes. This has motivated a line of work on ``strategic classification,'' wherein algorithmic decision rules are selected to prevent undesirable strategic responses. Prior works typically assume that the cost of such strategic behavior is fixed and independent of the classifier's decision. In practice, however, manipulation costs depend on past decisions: today's algorithmic decisions influence tomorrow's costs of strategic response. To capture this dependency, we propose to formulate the problem of strategic classification as a two-stage robust optimization problem with a decision-dependent uncertainty set. We formalize this problem, develop approximations and reformulations to solve it, and numerically illustrate our algorithm's ability to mitigate gaming of the algorithmic system.
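The abstract's core idea can be illustrated with a toy sketch. The following is a hypothetical example, not the paper's actual formulation: a robust hinge loss for a linear classifier in which each agent may shift its feature vector within a manipulation budget, and that budget depends on the classifier's current decision (the function `decision_dependent_budget` and its rule are illustrative assumptions).

```python
import numpy as np

def decision_dependent_budget(score, base=1.0):
    # Hypothetical rule: agents rejected today (score < 0) face a higher
    # manipulation cost tomorrow, modeled here as a smaller feasible shift.
    return base if score >= 0 else 0.5 * base

def robust_hinge_loss(w, X, y):
    """Worst-case hinge loss when each x may move within an l2 ball whose
    radius depends on the classifier's decision at x (a toy
    decision-dependent uncertainty set)."""
    losses = []
    for x, label in zip(X, y):
        score = w @ x
        r = decision_dependent_budget(score)
        # Over ||dx|| <= r, the margin label * w @ (x + dx) is minimized
        # by dx = -label * r * w / ||w||, giving the closed form below.
        worst_margin = label * score - r * np.linalg.norm(w)
        losses.append(max(0.0, 1.0 - worst_margin))
    return float(np.mean(losses))

w = np.array([1.0, -0.5])
X = np.array([[2.0, 1.0], [-1.0, 0.5]])
y = np.array([1, -1])
print(robust_hinge_loss(w, X, y))
```

A classifier minimizing this robust loss hedges against the worst strategic response each agent can afford, with the affordable response itself shaped by the classifier's decisions.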
Submission Number: 128