Hidden Patch Attacks for Optical Flow

Published: 21 Jun 2021, Last Modified: 05 May 2023 · ICML 2021 Workshop AML Poster
Keywords: Adversarial Machine Learning
TL;DR: By adjusting the alpha (transparency) value of the patch during training, we can produce patches that are invariant to their background and inconspicuous to human observers.
Abstract: Adversarial patches have been of interest to researchers in recent years due to their easy implementation in real-world attacks. In this paper we expand upon previous research by demonstrating a new "hidden" patch attack on optical flow. By altering the patch's transparency during training, we can generate patches that are invariant to their background, meaning they can be applied inconspicuously to any number of objects using a transparent film. This also reduces training costs when mass-producing adversarial objects, since a single trained patch suffices for any application. Although this specific implementation is demonstrated as a white-box attack on optical flow, the approach can be generalized to other tasks such as object recognition or semantic segmentation.
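
The core idea in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch (not the authors' code): the patch is alpha-blended onto randomly chosen backgrounds with a randomly sampled transparency at each step, so the optimized patch does not depend on what it is placed on. The helper name apply_patch, the alpha range, and the stand-in objective are illustrative assumptions; in the paper the loss would come from the attacked optical-flow model.

    import torch

    def apply_patch(image, patch, alpha, top, left):
        # Alpha-blend the patch into the image at (top, left):
        # result = alpha * patch + (1 - alpha) * background.
        out = image.clone()
        h, w = patch.shape[-2:]
        region = out[..., top:top + h, left:left + w]
        out[..., top:top + h, left:left + w] = alpha * patch + (1 - alpha) * region
        return out

    # Optimize a single patch over random backgrounds and transparency values
    # so that the result is background-invariant.
    patch = torch.rand(3, 64, 64, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=0.01)

    for step in range(100):
        frame = torch.rand(1, 3, 256, 256)          # stand-in for a video frame
        alpha = torch.empty(1).uniform_(0.3, 0.9)   # vary transparency each step
        top, left = torch.randint(0, 192, (2,)).tolist()
        attacked = apply_patch(frame, patch.clamp(0, 1), alpha, top, left)
        # In the actual attack the loss would target the optical-flow network,
        # e.g. loss = flow_objective(flow_model(attacked, next_frame)).
        loss = -attacked.mean()                     # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Sampling both the background and the transparency during optimization is what makes the trained patch suitable for printing on a transparent film and reusing across objects.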