TrackPGD: Efficient Adversarial Attack using Object Binary Masks against Robust Transformer Trackers

Published: 15 Oct 2024, Last Modified: 29 Dec 2024 · AdvML-Frontiers 2024 · CC BY 4.0
Keywords: object tracking, adversarial attack, white-box, segmentation
TL;DR: TrackPGD is a novel white-box adversarial attack that leverages predicted object binary masks to effectively mislead robust transformer-based object trackers like MixFormerM, OSTrackSTS, RTS, and TransT-SEG.
Abstract: Adversarial perturbations can deceive neural networks by adding small, imperceptible noise to the input. Recent object trackers with transformer backbones have shown strong performance on tracking datasets, but their adversarial robustness has not been thoroughly evaluated. While transformer trackers are resilient to black-box attacks, existing white-box adversarial attacks are not universally applicable against these new transformer trackers due to differences in backbone architecture. In this work, we introduce TrackPGD, a novel white-box attack that utilizes predicted object binary masks to target robust transformer trackers. Built upon the powerful segmentation attack SegPGD, our proposed TrackPGD effectively influences the decisions of transformer-based trackers. Our method addresses two primary challenges in adapting a segmentation attack to trackers: the small number of classes and the extreme pixel class imbalance. Using the same iteration budget as existing attacks on tracker networks, TrackPGD produces competitive adversarial examples that mislead transformer and non-transformer trackers such as MixFormerM, OSTrackSTS, TransT-SEG, and RTS on datasets including VOT2022STS, DAVIS2016, UAV123, and GOT-10k.
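To illustrate the kind of attack the abstract describes, here is a minimal, self-contained sketch of a PGD-style perturbation driven by a binary-mask loss. This is not the authors' TrackPGD implementation: the per-pixel logistic "predictor" stands in for a tracker's segmentation head, and the class-balance weighting is one plausible way to handle the foreground/background imbalance the paper mentions. All names (`pgd_mask_attack`, `weights`) are hypothetical.

```python
import numpy as np

def pgd_mask_attack(image, weights, target_mask, eps=0.03, alpha=0.01, iters=10):
    """Hedged sketch of a white-box PGD attack on a binary object mask.

    `image * weights` acts as a toy differentiable mask predictor (a
    stand-in for a real tracker's segmentation head). The loss is a
    class-balanced binary cross-entropy between the predicted mask and
    the target mask; the attack ascends its sign gradient while staying
    inside an L-infinity ball of radius `eps` around the clean image.
    """
    adv = image.copy()
    for _ in range(iters):
        logits = adv * weights                        # toy predictor
        probs = 1.0 / (1.0 + np.exp(-logits))         # sigmoid -> mask probabilities
        # Class-balance weights: upweight the rarer (foreground) pixels,
        # one way to counter the extreme pixel class imbalance.
        fg = target_mask.mean()
        w = np.where(target_mask > 0.5, 1.0 - fg, fg)
        # Gradient of the weighted BCE w.r.t. the input pixels:
        # dBCE/dlogit = probs - target, dlogit/dinput = weights.
        grad = w * (probs - target_mask) * weights
        adv = adv + alpha * np.sign(grad)             # ascend to worsen the mask
        adv = np.clip(adv, image - eps, image + eps)  # project into the eps-ball
        adv = np.clip(adv, 0.0, 1.0)                  # keep valid pixel range
    return adv
```

The real method operates on full transformer trackers and, per the abstract, inherits its loss design from SegPGD; this sketch only shows the shared skeleton: a mask-level loss, sign-gradient ascent, and projection onto the perturbation budget.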
Submission Number: 2