Boundary on the Table: Efficient Black-Box Decision-Based Attacks for Structured Data

ICLR 2026 Conference Submission17511 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Black-box adversarial attacks, Decision-based attacks, Tabular data, Structured data security, Gradient-free optimization, Adversarial robustness, Machine learning security
TL;DR: We propose a black-box, decision-based attack that leverages gradient-free direction estimation and iterative boundary search. Our method reduces model accuracy to near zero while maintaining domain-validity constraints across diverse datasets.
Abstract: Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a novel black-box, decision-based adversarial attack tailored for tabular data. Our approach combines gradient-free direction estimation with an iterative boundary search, enabling efficient navigation of discrete and continuous feature spaces under minimal oracle access. Extensive experiments demonstrate that our method successfully compromises nearly the entire test set across diverse models, ranging from classical machine learning classifiers to large language model (LLM)-based pipelines. Remarkably, the attack achieves success rates consistently above 90%, while requiring only a small number of queries per instance. These results highlight the critical vulnerability of tabular models to adversarial perturbations, underscoring the urgent need for stronger defenses in real-world decision-making systems.
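The abstract describes two generic components of decision-based attacks: an iterative search for the decision boundary using only hard-label queries, and a gradient-free estimate of a promising perturbation direction. The paper's exact algorithm is not given here, so the following is a minimal sketch of these two standard primitives under illustrative assumptions: `oracle` is a hypothetical hard-label linear classifier standing in for any black-box tabular model, and the direction estimate averages random unit probes weighted by the label they receive near the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    # Hypothetical hard-label classifier: only the predicted label (0/1) is
    # observable, as in the decision-based threat model. The linear weights
    # are an illustrative assumption, not the paper's target models.
    w = np.array([1.0, -2.0, 0.5, 1.5])
    return int(x @ w > 0)

def binary_search_boundary(x_orig, x_adv, tol=1e-6):
    # Walk the segment from a clean point (x_orig) toward a known adversarial
    # point (x_adv), using only label queries, to locate the decision boundary.
    orig_label = oracle(x_orig)
    lo, hi = 0.0, 1.0  # fraction of the way from x_orig toward x_adv
    while hi - lo > tol:
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_orig + mid * x_adv
        if oracle(x_mid) != orig_label:
            hi = mid  # midpoint is still adversarial: tighten from above
        else:
            lo = mid  # midpoint has the clean label: tighten from below
    return (1 - hi) * x_orig + hi * x_adv  # just on the adversarial side

def estimate_direction(x_boundary, n_samples=200, delta=1e-2):
    # Gradient-free direction estimate at a boundary point: probe random unit
    # directions and average them, signed by whether the probe lands on the
    # adversarial side. This is the Monte-Carlo estimator used by attacks
    # such as HopSkipJump; the paper's estimator may differ.
    dirs = rng.standard_normal((n_samples, x_boundary.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    signs = np.array([1.0 if oracle(x_boundary + delta * d) else -1.0
                      for d in dirs])
    g = (signs[:, None] * dirs).mean(axis=0)
    return g / (np.linalg.norm(g) + 1e-12)
```

With the toy linear oracle above, the binary search converges to a point with a much smaller perturbation than the initial adversarial example, and the estimated direction aligns with the classifier's (unobserved) weight vector, which is exactly the information a query-efficient attack needs.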
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17511