Abstract: Many machine learning applications (credit scoring, fraud detection, etc.) use data from tabular domains. Adversarial examples can be especially damaging for these applications. Yet, existing works on adversarial robustness focus mainly on machine learning models in the image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are inappropriate for tabular domains. These models do not capture that cost can be more important than imperceptibility, nor that the adversary could ascribe different value to the utility obtained from deploying different adversarial examples. We show that, because of these differences, the attack and defence methods used for images and text cannot be directly applied to the tabular setting. We address these issues by proposing new cost- and utility-aware threat models tailored to the capabilities and constraints of attackers targeting tabular domains. We show that our approach is effective on two tabular datasets corresponding to applications for which adversarial examples can have economic and social implications.