Abstract: Adversarial attacks introduce small, imperceptible perturbations into input data to force deep neural networks into incorrect predictions. Such attacks not only help assess model robustness and security but also reveal potential vulnerabilities, providing a foundation for optimizing defense mechanisms. However, attack models built around a single design often struggle to cope with complex and evolving defense strategies. In addition, traditional methods that rely on manual parameter tuning cannot capture internal model information in black-box scenarios, making it difficult to maintain efficiency and transferability across diverse target models and data distributions. To address these issues, this paper proposes a Cascade Adversarial Attack Search approach based on multi-objective optimization, named CAAS. Specifically, the method constructs a comprehensive search space encompassing various attack algorithms, models, and their hyperparameter combinations, and employs a cascade strategy that applies multiple attack techniques in sequence, aiming to improve transfer attack success rates while reducing attack cost. Experiments on ten randomly selected target models demonstrate that CAAS not only significantly improves attack success rates but also effectively controls attack cost, showcasing its superior performance in the field of adversarial attacks.
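The abstract's core idea of searching over cascades of attacks and scoring them by success rate and cost can be sketched on a toy problem. This is a minimal illustration only: the "model", the two attack stages, and all function names are assumptions for exposition, not the paper's actual CAAS algorithm or search space.

```python
import random

def model(x):
    """Toy linear classifier on 2-D points (illustrative stand-in): class 1 iff x0 + x1 > 1."""
    return 1 if x[0] + x[1] > 1.0 else 0

def signed_step(x, eps):
    """FGSM-like stage (assumed form): a signed step of size eps toward the boundary."""
    return (x[0] - eps, x[1] - eps)

def random_step(x, eps, rng):
    """Noise stage (assumed form): a random perturbation bounded by eps."""
    return (x[0] + rng.uniform(-eps, eps), x[1] + rng.uniform(-eps, eps))

def run_cascade(cascade, inputs, rng):
    """Apply attack stages in sequence; return (success rate, total cost = summed eps)."""
    flips, cost = 0, 0.0
    for x in inputs:
        orig, adv = model(x), x
        for name, eps in cascade:
            adv = signed_step(adv, eps) if name == "signed" else random_step(adv, eps, rng)
            cost += eps
        if model(adv) != orig:
            flips += 1
    return flips / len(inputs), cost

def search(inputs, space, seed=0):
    """Enumerate cascades of 1-2 stages; prefer high success rate, then low cost."""
    rng = random.Random(seed)
    best, best_key = None, None
    candidates = [(s,) for s in space] + [(a, b) for a in space for b in space]
    for cascade in candidates:
        rate, cost = run_cascade(cascade, inputs, rng)
        key = (-rate, cost)  # lexicographic: maximize success rate, minimize cost
        if best_key is None or key < best_key:
            best, best_key = cascade, key
    return best, -best_key[0], best_key[1]

# Three clean inputs, all classified as class 1 (each sums to 1.25).
inputs = [(0.75, 0.5), (0.5, 0.75), (1.0, 0.25)]
space = [("signed", 0.25), ("signed", 0.5), ("noise", 0.25)]
best, rate, cost = search(inputs, space)
print(best, rate, cost)
```

The key design point mirrored here is the scoring: cascades are compared first by transfer success rate and then by accumulated perturbation cost, so a cheaper cascade wins among equally successful ones.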
External IDs: dblp:conf/ijcnn/WangSSYYT25