Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical world. We present the first method of generating human-producible adversarial examples for the real world that requires nothing more complicated than a marker pen. We call them $\textbf{\textit{adversarial tags}}$. First, building on differentiable rendering, we demonstrate that it is possible to build potent adversarial examples from lines alone. We find that by drawing just $4$ lines we can disrupt a YOLO-based model in $54.8$% of cases; increasing this to $9$ lines disrupts $81.8$% of tested cases. Next, we devise an improved method for placing the lines that is robust to human drawing error. We thoroughly evaluate our system in both the digital and the analogue world and demonstrate that our tags can be applied by untrained humans. We demonstrate the effectiveness of our method for producing real-world adversarial examples through a user study in which participants were asked to draw over printed-out images using the digital equivalents as guides. We further evaluate the effectiveness of both targeted and untargeted attacks, and discuss various trade-offs and limitations of the method, as well as the practical and ethical implications of our work. The source code will be released publicly.
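To make the core idea concrete, here is a minimal, hypothetical sketch of a line-based attack. It is not the paper's method: the paper optimizes line parameters through a differentiable renderer against a YOLO-based detector, whereas this sketch uses a simple rasterizer, a toy linear surrogate loss in place of a real detector, and black-box random search in place of gradient-based optimization. All names (`draw_line`, `attack_loss`) and parameters (image size, line count, iteration budget) are illustrative assumptions.

```python
import numpy as np

def draw_line(img, p0, p1, value=0.0):
    """Rasterize a straight line segment onto a copy of a grayscale
    image by dense sampling along the segment. A non-differentiable
    stand-in for the differentiable renderer used in the paper."""
    out = img.copy()
    h, w = img.shape
    ts = np.linspace(0.0, 1.0, 200)  # samples along the segment
    xs = (p0[0] + ts * (p1[0] - p0[0])).round().astype(int)
    ys = (p0[1] + ts * (p1[1] - p0[1])).round().astype(int)
    keep = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out[ys[keep], xs[keep]] = value  # draw in "marker-pen" intensity
    return out

def attack_loss(img, weights):
    """Toy surrogate objective: score of a linear 'model'. The attack
    tries to lower it (untargeted). A real attack would use the
    detector's loss instead."""
    return float((img * weights).sum())

rng = np.random.default_rng(0)
h = w = 32
img = np.ones((h, w))                 # blank white target surface
weights = rng.normal(size=(h, w))     # toy model parameters

# Black-box random search over 4-line "tags".
best, best_loss = img, attack_loss(img, weights)
for _ in range(500):
    cand = img
    for _line in range(4):
        p0 = rng.integers(0, w, size=2)
        p1 = rng.integers(0, h, size=2)
        cand = draw_line(cand, p0, p1)
    loss = attack_loss(cand, weights)
    if loss < best_loss:
        best, best_loss = cand, loss
```

After the search, `best` holds the 4-line tag that most reduced the surrogate score. The paper's actual pipeline replaces the random search with gradient descent through a differentiable line renderer, which is what makes optimizing a handful of human-drawable strokes tractable against a full detection model.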