Abstract: Deep neural networks have achieved exceptional performance across a wide range of applications but remain susceptible to adversarial attacks. While most prior research has focused on single-task scenarios, increasing attention is being directed toward adversarial attacks that target multiple tasks simultaneously. However, existing methods often fail to balance attack performance across the tasks of a multi-task model: they typically maximize the model's overall loss while neglecting task-specific attack difficulties, which results in imbalanced attack performance among tasks. To address this challenge, we propose a novel multi-task adversarial attack method that ensures robust and balanced attack performance across multiple tasks. Our approach dynamically updates task-specific weighting factors through a min-max optimization during the attack, optimizing the worst-case attack performance across all tasks. Experimental results demonstrate that our method significantly enhances worst-case attack performance across diverse datasets and attack strategies compared to existing approaches. By dynamically increasing the attack intensity on the least vulnerable tasks, the min-max optimization balances the task weights and thereby improves both overall attack effectiveness and worst-case performance.
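The weighting scheme described above can be illustrated with a minimal sketch. The code below is our reading of the abstract, not the paper's actual algorithm: an L∞-bounded signed-gradient (PGD-style) ascent step maximizes the weighted sum of task losses, while a multiplicative-weights step shifts the weights toward the currently least-attacked (lowest-loss) tasks, approximating the inner minimization over the simplex. The `(loss_fn, grad_fn)` task interface and all hyperparameter names are hypothetical.

```python
import numpy as np

def minmax_multitask_attack(tasks, x, epsilon=0.1, alpha=0.05,
                            steps=10, w_lr=0.5):
    """Sketch of a min-max multi-task attack (assumed formulation).

    tasks   : list of (loss_fn, grad_fn) pairs, one per task -- a
              hypothetical interface standing in for a multi-task model
    x       : clean input (numpy array)
    epsilon : L-infinity budget for the perturbation
    alpha   : step size of the signed-gradient ascent on delta
    w_lr    : step size of the multiplicative-weights update on w
    """
    n = len(tasks)
    w = np.full(n, 1.0 / n)        # task weights on the probability simplex
    delta = np.zeros_like(x)       # adversarial perturbation

    for _ in range(steps):
        losses = np.array([loss(x + delta) for loss, _ in tasks])
        # Ascent on delta: signed gradient of the weighted loss, then
        # projection back into the L-infinity ball (PGD-style step).
        g = sum(wi * grad(x + delta) for wi, (_, grad) in zip(w, tasks))
        delta = np.clip(delta + alpha * np.sign(g), -epsilon, epsilon)
        # Multiplicative-weights step on w: tasks whose loss is still low
        # (least vulnerable) gain weight, so the next delta step attacks
        # them harder; this approximates min over w on the simplex.
        w = w * np.exp(-w_lr * losses)
        w = w / w.sum()
    return delta, w
```

On a toy problem with two linear task losses of different scales, the weight on the harder (smaller-gradient) task grows over the iterations, which is the balancing behavior the abstract describes.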