Adversarial Instance Attacks for Interactions between Human and Object

20 Sept 2023 (modified: 08 Apr 2024) · ICLR 2024 Conference Withdrawn Submission
Supplementary Material: pdf
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Adversarial Attacks; Human Object Interaction
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Adversarial attacks can easily deceive deep neural networks (DNNs); at the same time, they are valuable for evaluating the robustness of DNNs. Existing attacks on object detection primarily target the recognition of individual objects, while it remains largely unexplored whether such attacks are effective on more complex scene-understanding tasks, e.g., extracting the interactions between objects. In this paper, we, for the first time, propose Adversarial Instance Attacks, a novel black-box attack framework that targets scene interactions without interfering with object detection. To achieve this goal, we first introduce an Interaction Area Sampling module that identifies vulnerable anchors (areas) for positioning adversarial instances. Second, we design an Object Category Search module and build an interaction co-occurrence knowledge graph to find categories with high obfuscation scores toward specific object-interaction pairs. Finally, our framework generates perturbations that act as adversarial instances with high co-occurrence obfuscation toward specific interactions in vulnerable areas, thereby deceiving HOI models. Extensive experiments against multiple models demonstrate the effectiveness of our framework in attacking the interactions of HOI models; our approach surpasses existing methods by significant margins, achieving an improvement of at least +10.36%.
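To make the Object Category Search idea concrete, below is a minimal sketch of how a co-occurrence obfuscation score could be computed from an interaction co-occurrence knowledge graph. Everything here is an illustrative assumption, not the authors' actual implementation: the co-occurrence counts, the smoothing, and the scoring function are all hypothetical. The intent is to pick a candidate instance category that strongly supports a competing interaction while rarely co-occurring with the target interaction.

```python
# Hedged sketch of a co-occurrence obfuscation score (assumed, not the paper's code).
from collections import defaultdict

# Hypothetical co-occurrence counts harvested from HOI annotations:
# co_occ[(object_category, interaction)] = number of joint occurrences.
co_occ = defaultdict(int, {
    ("bicycle", "ride"): 950,
    ("bicycle", "repair"): 120,
    ("wrench", "repair"): 800,
    ("wrench", "ride"): 5,
    ("helmet", "ride"): 600,
})

def obfuscation_score(candidate, target_interaction, interactions):
    """Score how strongly `candidate` pulls predictions away from
    `target_interaction` toward a competing interaction (assumed form)."""
    target_support = co_occ[(candidate, target_interaction)] + 1  # +1 smoothing
    competing = max(co_occ[(candidate, i)]
                    for i in interactions if i != target_interaction)
    return (competing + 1) / target_support

interactions = ["ride", "repair"]
candidates = ["wrench", "helmet"]
best = max(candidates,
           key=lambda c: obfuscation_score(c, "ride", interactions))
print(best)  # "wrench": co-occurs with "repair" often, with "ride" rarely
```

Under this assumed scoring, inserting a "wrench"-like adversarial instance near a (person, bicycle, ride) triplet would push the model toward the competing "repair" interaction while leaving the detected objects intact, which matches the attack objective described in the abstract.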
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2189