Abstract: Physically realizable adversarial patterns have become a sophisticated form of adversarial ML across industry and academia, with several algorithms that successfully thwart state-of-the-art ML models. However, there are currently no standard practices for evaluating adversarial patterns. We identify two components that provide insight into an adversarial pattern's performance: the seeding of the algorithm that produces the adversarial pattern, and the use of control patterns (patterns that serve as baseline comparisons for the adversarial pattern). In this study, we implement and compare the performance of a variety of control patterns (solid white, solid gray, solid black, and random noise). We train state-of-the-art DNN object detection models on an open-source dataset and, using the trained models, evaluate performance on the various control patterns in order to establish performance baselines for current and future adversarial pattern algorithms.
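The four control patterns named above can be sketched as simple image arrays. This is a minimal illustration, not the paper's implementation; the image size, RGB layout, and uint8 pixel range are assumptions for the example.

```python
import numpy as np

def make_control_patterns(height=300, width=300, seed=0):
    """Generate the four baseline control patterns: solid white,
    solid gray, solid black, and uniform random noise.
    Dimensions and dtype are illustrative assumptions."""
    rng = np.random.default_rng(seed)  # fixed seed for reproducibility
    shape = (height, width, 3)         # assumed RGB layout
    return {
        "white": np.full(shape, 255, dtype=np.uint8),
        "gray":  np.full(shape, 128, dtype=np.uint8),
        "black": np.zeros(shape, dtype=np.uint8),
        "noise": rng.integers(0, 256, size=shape, dtype=np.uint8),
    }

patterns = make_control_patterns()
for name, img in patterns.items():
    print(name, img.shape, img.dtype)
```

In an evaluation loop, each such pattern would be placed on the target (e.g., rendered onto a patch region) and passed through the trained detector, giving a baseline detection rate against which an adversarial pattern's effect can be measured.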