Abstract: Deep neural networks are vulnerable to adversarial examples. Although adversarial examples achieve high white-box attack success rates, their transferability is poor in the black-box setting. Momentum is often integrated into attacks to prevent adversarial examples from overfitting the source model and thereby improve their transferability. However, conventional momentum accumulates only a few gradients during the early iterations, so the early adversarial examples already overfit the source model. We therefore propose Experienced Momentum (EM), which is trained on a set of models derived by Random Channel Swapping (RCS). Since EM accounts for the loss-increasing direction of multiple models, using EM as the initial value of the momentum makes adversarial examples transferable across models from the early iterations onward. Moreover, conventional Nesterov momentum considers only the previous gradients and ignores the gradient at the current data point throughout the pre-update, making the estimate of the next position imprecise. This prompts us to propose Precise Nesterov momentum (PN), which retains the looking-ahead property while also incorporating the gradient at the current data point into the pre-update. To further improve transferability, we combine EM and PN into Experienced Precise Nesterov momentum (EPN). Extensive experiments on the ImageNet dataset against normally trained and defense models demonstrate that the proposed EPN is more effective than conventional momentum at improving transferability.
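The momentum variants contrasted in the abstract can be illustrated on a toy loss surface. The sketch below is a hypothetical NumPy reconstruction, not the paper's implementation: `mi_step` is conventional momentum (MI-FGSM style), `ni_step` is conventional Nesterov momentum, which looks ahead along the accumulated momentum only, and `pn_step` guesses at the PN idea by also folding the current point's normalized gradient into the pre-update. All function names, the quadratic loss, and the step sizes are assumptions for illustration.

```python
import numpy as np

# Toy target: the attack "ascends" L(x) = -||x - t||^2, whose gradient
# points from x toward t. This stands in for a model's loss gradient.
t = np.array([1.0, -2.0, 0.5])

def grad(x):
    # Gradient of the toy loss L at x.
    return -2.0 * (x - t)

def _norm(g):
    # L1-normalize a gradient, as momentum attacks commonly do.
    return g / (np.abs(g).sum() + 1e-12)

def mi_step(x, g, mu=1.0, alpha=0.1):
    # Conventional momentum: accumulate normalized gradients, step by sign.
    g = mu * g + _norm(grad(x))
    return x + alpha * np.sign(g), g

def ni_step(x, g, mu=1.0, alpha=0.1):
    # Conventional Nesterov momentum: the pre-update (look-ahead) uses
    # only the accumulated momentum, ignoring the current point's gradient.
    x_nes = x + alpha * mu * g
    g = mu * g + _norm(grad(x_nes))
    return x + alpha * np.sign(g), g

def pn_step(x, g, mu=1.0, alpha=0.1):
    # Hypothetical Precise Nesterov (PN) sketch: the pre-update also
    # includes the gradient at the current point, aiming the look-ahead
    # more precisely at the next position.
    x_pre = x + alpha * (mu * g + _norm(grad(x)))
    g = mu * g + _norm(grad(x_pre))
    return x + alpha * np.sign(g), g

# A few PN iterations move x toward the toy optimum t.
x, g = np.zeros(3), np.zeros(3)
for _ in range(10):
    x, g = pn_step(x, g)
```

In this toy setting all three variants converge; the abstract's claim is that on real source models the PN pre-update yields a better next-position estimate, and hence gradients that transfer better.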