Keywords: active learning, strategic classification, perceptron algorithm
TL;DR: Initiating active learning in strategic classification.
Abstract: Strategic classification is an emerging area of modern machine learning research that models scenarios where input features are provided by individuals who might manipulate them to receive better outcomes, e.g., in hiring, admissions, and loan decisions. Prior work has focused on supervised settings, where human experts label all training examples.
However, labeling all training data can be costly, as it requires expert intervention. In this work, we initiate the study of active learning for strategic classification, where the learning algorithm takes a much more active role than in the classic fully supervised setting in order to learn from far fewer label requests.
Our main result is an algorithm for actively learning linear separators in the strategic setting that preserves the exponential improvement in label complexity over passive learning previously achieved in the simpler non-strategic case. Specifically, we show that for data uniformly distributed over the unit sphere, a modified version of the Active Perceptron algorithm [Dasgupta et al. 2005, Yan and Zhang 2017] achieves excess error $\epsilon$ after requesting only $\tilde{O}\left(d \ln \frac{1}{\epsilon}\right)$
labels and making only an additive $\tilde{O}\left(d \ln \frac{1}{\epsilon}\right)$ mistakes compared to the best classifier, even when an $\tilde{\Omega}(\epsilon)$ fraction of the inputs are flipped. The algorithm is computationally efficient, and under these distributional assumptions its number of label queries is substantially better than that of prior work on the strategic Perceptron [Ahmadi et al. 2021].
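To make the setting concrete, here is a minimal sketch (not the paper's actual algorithm) of how margin-based label querying can be combined with a strategic-manipulation correction in a Perceptron loop. The best-response model (agents may move up to a known budget `delta` along the learner's normal vector to gain a positive label), the correction step, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def strategic_active_perceptron(X, y, delta=0.1, margin=0.3, budget=100):
    """Sketch: margin-based active Perceptron with a strategic correction.

    Agents with true features x may report x + delta * w/||w|| when that
    flips the current prediction to positive (an assumed best-response model).
    Labels are requested only for reported points near the current boundary.
    """
    d = X.shape[1]
    w = np.zeros(d)
    w[0] = 1.0  # arbitrary nonzero initialization
    queries = 0
    for x, label in zip(X, y):
        u = w / np.linalg.norm(w)
        # Agent best response: manipulate only if it flips the outcome to +1.
        z = x
        if x @ w < 0 <= (x + delta * u) @ w:
            z = x + delta * u
        # Undo the (assumed known) manipulation on positively classified points.
        x_hat = z - delta * u if z @ w >= 0 else z
        # Active step: query the label only inside the margin region.
        if abs(x_hat @ u) <= margin and queries < budget:
            queries += 1
            if label * (x_hat @ w) <= 0:
                w = w + label * x_hat  # standard Perceptron update on a mistake
    return w, queries
```

The margin test is what drives down label complexity: points far from the boundary are classified without any label request, so queries concentrate where they are informative.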
Supplementary Material: zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 22097