Evasion Attacks Against Bayesian Predictive Models

Published: 07 May 2025, Last Modified: 13 Jun 2025
UAI 2025 Oral
License: CC BY 4.0
Keywords: Adversarial Machine Learning, Bayesian Models, Evasion Attacks
Abstract: There is increasing interest in analyzing the behavior of machine learning systems under adversarial attacks. However, most research in adversarial machine learning has focused on the weaknesses of predictive models in classical setups against evasion or poisoning attacks, while the susceptibility of Bayesian predictive models to attacks remains underexplored. This paper introduces a general methodology for designing optimal evasion attacks against such models. We investigate two adversarial objectives: perturbing specific point predictions and altering the entire posterior predictive distribution. For both scenarios, we propose novel gradient-based attacks and study their implementation and properties in various computational setups.
Supplementary Material: zip
Latex Source Code: zip
Code Link: https://github.com/pablogarciarce/AdvReg
Signed PMLR Licence Agreement: pdf
Submission Number: 546
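As a rough illustration of the attack setting described in the abstract, below is a minimal, hypothetical sketch (not the authors' method or the code in the linked repository) of a gradient-based evasion attack against a conjugate Bayesian linear regression: an input is perturbed within an assumed L2 budget so that the posterior predictive mean moves toward an attacker-chosen target. The model, budget, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch: gradient-based evasion attack on the posterior predictive
# mean of a conjugate Bayesian linear regression (not the paper's implementation).
import torch

torch.manual_seed(0)

# --- Fit a conjugate Bayesian linear regression with known noise variance ---
d, n = 5, 200
sigma2 = 0.25                                   # observation noise variance (assumed known)
X = torch.randn(n, d)
w_true = torch.randn(d)
y = X @ w_true + sigma2 ** 0.5 * torch.randn(n)

prior_prec = torch.eye(d)                       # N(0, I) prior on the weights
post_prec = prior_prec + X.T @ X / sigma2       # posterior precision
post_cov = torch.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ y / sigma2)       # posterior mean of the weights

def predictive(x):
    """Posterior predictive mean and variance at input x."""
    mean = x @ post_mean
    var = sigma2 + x @ post_cov @ x
    return mean, var

# --- Evasion attack: shift the point prediction toward an attacker target ---
x_clean = torch.randn(d)
y_target = predictive(x_clean)[0].detach() + 3.0    # attacker's desired prediction
eps = 0.5                                           # L2 perturbation budget (assumed)

delta = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    mean, _ = predictive(x_clean + delta)
    loss = (mean - y_target) ** 2                   # attacker objective on the point prediction
    loss.backward()
    opt.step()
    with torch.no_grad():                           # project back onto the L2 ball
        norm = delta.norm()
        if norm > eps:
            delta *= eps / norm

x_adv = (x_clean + delta).detach()
print("clean prediction   :", predictive(x_clean)[0].item())
print("attacked prediction:", predictive(x_adv)[0].item())
```

The second adversarial objective mentioned in the abstract, altering the entire posterior predictive distribution, would replace the squared-error objective above with a discrepancy between predictive distributions (for example a KL divergence), while keeping the same projected gradient loop.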