Linear Backpropagation Leads to Faster Convergence

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Convergence analysis, Backpropagation analysis
Abstract: Backpropagation is widely used to compute gradients in deep neural networks (DNNs). Often applied together with stochastic gradient descent (SGD) or its variants, backpropagation is the de facto choice in a variety of machine learning tasks, including DNN training and adversarial attack/defense. Nevertheless, unlike SGD, which has been studied intensively in recent years, backpropagation itself has been somewhat overlooked. In this paper, we study the recently proposed ``linear backpropagation'' (LinBP), which modifies standard backpropagation and can improve transferability in black-box adversarial attacks. By providing theoretical analyses of LinBP in neural-network-based learning tasks, including white-box adversarial attack and model training, we demonstrate that, somewhat surprisingly, LinBP can lead to faster convergence in these tasks. We also confirm our theoretical results with extensive experiments.
One-sentence Summary: We provide a convergence analysis of linear backpropagation and find that it can lead to faster convergence.
Supplementary Material: zip
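As a rough illustration of the idea described in the abstract (not the authors' implementation), LinBP keeps the usual nonlinear forward pass but backpropagates through activations as if they were linear; a common form of this, reported in the LinBP literature, replaces the ReLU derivative with the identity in the backward pass. Below is a minimal PyTorch sketch under that assumption; the names `LinearBackpropReLU` and `linbp_relu` are hypothetical.

```python
import torch


class LinearBackpropReLU(torch.autograd.Function):
    """ReLU with a LinBP-style backward pass: the forward computation is the
    standard ReLU, but the gradient is passed through as if the layer were
    linear (the ReLU derivative is replaced by the identity)."""

    @staticmethod
    def forward(ctx, x):
        # Standard ReLU forward pass.
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Skip the ReLU mask: propagate the incoming gradient unchanged.
        return grad_output


def linbp_relu(x):
    """Drop-in replacement for torch.relu when LinBP-style gradients are wanted."""
    return LinearBackpropReLU.apply(x)
```

In this sketch, swapping `torch.relu` for `linbp_relu` in a network leaves the loss values unchanged while altering only the gradients that SGD (or an attack such as PGD) consumes, which is the setting whose convergence the paper analyzes.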