Decision-based evasion attacks on tree ensemble classifiers

Published: 01 Jan 2020, Last Modified: 13 May 2023. World Wide Web, 2020.
Abstract: Learning-based classifiers are known to be susceptible to adversarial examples. Recent studies have suggested that ensemble classifiers tend to be more robust than single classifiers against evasion attacks. In this paper, we argue that this is not necessarily the case. In particular, we show that a discrete-valued random forest classifier can be easily evaded by adversarial inputs manipulated based only on the model's decision outputs. The proposed evasion algorithm is gradient-free and can be implemented efficiently. Our evaluation results demonstrate that random forests can be even more vulnerable than SVMs, whether single or ensemble, to evasion attacks under both white-box and the more realistic black-box settings.
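To illustrate the core idea of a decision-based (gradient-free) evasion attack, the sketch below perturbs an input at random and accepts a candidate as soon as the classifier's predicted label changes, querying only `predict()`. This is a minimal illustration of the attack setting, not the paper's algorithm; the random-forest model, step size, and query budget are all assumptions chosen for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def decision_based_evasion(model, x, true_label, step=1.0, n_queries=2000, seed=0):
    """Gradient-free evasion sketch: sample random perturbations and return
    the first candidate the model misclassifies, using only decision outputs."""
    rng = np.random.default_rng(seed)
    for _ in range(n_queries):
        candidate = x + rng.normal(scale=step, size=x.shape)
        # Black-box query: only the predicted label is observed, no gradients.
        if model.predict(candidate.reshape(1, -1))[0] != true_label:
            return candidate  # adversarial example found
    return None  # attack failed within the query budget

# Hypothetical setup: a random forest on synthetic two-class data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
x0 = X[0]
y0 = clf.predict(X[:1])[0]
adv = decision_based_evasion(clf, x0, y0)
```

A real attack would additionally minimize the perturbation size (e.g. by shrinking `step` once a misclassified point is found), but even this naive random search shows that model decisions alone suffice to evade a discrete-valued ensemble.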