Adversarial Patch Defense for Optical Flow Networks in Video Action Recognition

17 Nov 2022, OpenReview Archive Direct Upload
Abstract: Deep neural networks (DNNs) are being integrated into production-level systems worldwide. Because adversarial attacks can degrade their performance, the robustness of DNN models has become a concern for researchers. Recent work on physically printable adversarial patches demonstrates their practical effectiveness at fooling DNNs in real time. Although research on adversarial robustness has focused mainly on image classifiers, recent works have extended these studies to video classification systems. We demonstrate that attacking the optical flow estimator in an action recognition system with a patch covering less than 1% of the frame resolution is sufficient to significantly degrade the model's performance. We implement an existing defense against such localized patch-based attacks, Local Gradient Smoothing (LGS), in video classification systems. We further address the shortcomings of LGS and develop a method called Inpainting with Laplacian Prior (ILP). ILP improves accuracy by 3.7% to 37% over LGS while maintaining the same consistency as LGS across different patch sizes and models on multiple datasets.
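The LGS defense mentioned above works by suppressing image regions whose local gradient magnitude is anomalously high, since adversarial patches tend to contain high-frequency, high-contrast texture. A minimal single-channel sketch of that idea follows; the window-free normalization, threshold, and smoothing factor here are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def local_gradient_smoothing(img, threshold=0.1, smoothing=2.3):
    """Suppress high-gradient regions of a single-channel image in [0, 1].

    Sketch of the LGS idea: pixels whose normalized gradient magnitude
    exceeds `threshold` are attenuated proportionally to that magnitude.
    Parameter values are illustrative assumptions.
    """
    img = img.astype(np.float64)
    # First-difference gradients along rows and columns.
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize gradient magnitude to [0, 1].
    mag_norm = mag / (mag.max() + 1e-12)
    # Build a suppression mask: only pixels above the threshold are touched,
    # and the attenuation scales with gradient strength.
    mask = np.clip(smoothing * mag_norm, 0.0, 1.0) * (mag_norm > threshold)
    return img * (1.0 - mask)

# Toy example: a bright square patch on a dark background. The sharp patch
# boundary has high gradient and gets suppressed; flat regions are unchanged.
frame = np.zeros((32, 32))
frame[10:15, 10:15] = 1.0
cleaned = local_gradient_smoothing(frame)
```

In the full pipeline this preprocessing would be applied to each video frame before optical flow estimation, so the flow network never sees the patch's high-contrast texture at full strength.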
