OmniPatch: A Universal Adversarial Patch for ViT-CNN Cross-Architecture Transfer in Semantic Segmentation
Keywords: Vision Transformer Robustness, Autonomous Driving, Computer Vision, Adversarial ML, Black-Box Attacks, Transferable Adversarial Attacks, Trustworthy AI, Ensemble Training, AI Safety, AI Security, AI Robustness, Semantic Segmentation
TL;DR: OmniPatch learns a single patch-based universal adversarial perturbation that is effective across images and across both ViT and CNN architectures, using sensitive-region placement, two-stage ViT/CNN surrogate training with gradient alignment, and auxiliary regularizers.
Abstract: Robust semantic segmentation is crucial for safe autonomous driving, yet deployed models remain vulnerable to black-box adversarial attacks, in which the attacker has no access to the target model's weights. Most existing approaches either craft image-wide perturbations or optimize patches for a single architecture, which limits their practicality and transferability. We introduce $\textbf{OmniPatch}$, a training framework for learning a $\textit{universal adversarial patch}$ that generalizes across images and across both ViT and CNN architectures without requiring access to target model parameters.
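The abstract names gradient alignment between ViT and CNN surrogates only at a high level. As a minimal, purely illustrative sketch, the core idea can be shown with toy quadratic surrogate losses: compute each surrogate's gradient with respect to the shared patch, measure their cosine agreement, and take a joint ascent step. All shapes, losses, and the averaging update here are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def cosine_alignment(g1, g2, eps=1e-8):
    # Cosine similarity between two flattened gradient vectors.
    g1, g2 = g1.ravel(), g2.ravel()
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + eps))

rng = np.random.default_rng(0)
patch = rng.standard_normal(16)  # flattened universal patch (toy size)

# Hypothetical quadratic surrogate losses L_i(p) = ||A_i p - b_i||^2,
# whose exact gradients are 2 A_i^T (A_i p - b_i).
A_vit, b_vit = rng.standard_normal((8, 16)), rng.standard_normal(8)
A_cnn, b_cnn = rng.standard_normal((8, 16)), rng.standard_normal(8)
g_vit = 2 * A_vit.T @ (A_vit @ patch - b_vit)
g_cnn = 2 * A_cnn.T @ (A_cnn @ patch - b_cnn)

align = cosine_alignment(g_vit, g_cnn)          # agreement diagnostic in [-1, 1]
step = 0.01
patch_next = patch + step * (g_vit + g_cnn) / 2  # joint ascent on both surrogates
```

In a real attack, the surrogate losses would come from segmentation models and the alignment score would typically enter the objective (or gate the update) rather than serve only as a diagnostic.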
Submission Number: 303