Effect of Optimization Techniques on Feedback Alignment Learning of Neural Networks

Published: 2023, Last Modified: 12 May 2023, ICAIIC 2023
Abstract: The error backpropagation algorithm is the representative learning method used in most deep network models. However, despite its strong performance, the error backpropagation algorithm has clear limits to its biological plausibility. Unlike the learning mechanism of the actual brain, the error backpropagation algorithm must reuse the weights from the forward computation for the backward error propagation. To overcome this limitation, the feedback alignment method, which uses fixed random weights for the backward computation, was proposed. The feedback alignment algorithm has shown performance comparable to the original error backpropagation on several benchmark data sets. However, its analysis is still at a preliminary stage, and further study of its learning behavior and practical efficiency is needed. In this paper, we combine the feedback alignment learning method with popular optimization techniques such as RMSprop and Adam, and investigate their effect on learning performance through computational experiments on benchmark data sets.
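To make the core idea concrete, the following is a minimal NumPy sketch of feedback alignment on a one-hidden-layer network and a toy regression task. The network size, learning rate, and task (learning y = sum(x)) are illustrative assumptions, not details from the paper; the only essential point is that the backward pass uses a fixed random matrix B in place of the transposed forward weight W2.T.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 16, 1
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hidden))
# Fixed random feedback matrix: stands in for W2.T during the backward pass
# and is never updated (the defining feature of feedback alignment).
B = rng.normal(0, 0.5, (n_hidden, n_out))

def forward(x):
    a1 = W1 @ x
    h = np.maximum(a1, 0.0)  # ReLU hidden layer
    return a1, h, W2 @ h

def mse(X, Y):
    return np.mean([(forward(x)[2] - y) ** 2 for x, y in zip(X, Y)])

# Hypothetical toy task: regress y = sum(x).
X_test = rng.normal(size=(50, n_in))
Y_test = X_test.sum(axis=1, keepdims=True)
loss_before = mse(X_test, Y_test)

lr = 0.01
for _ in range(3000):
    x = rng.normal(size=n_in)
    y_true = np.array([x.sum()])
    a1, h, y = forward(x)
    e = y - y_true                 # output error
    delta_h = (B @ e) * (a1 > 0)   # error sent back through fixed B, not W2.T
    W2 -= lr * np.outer(e, h)      # plain SGD updates; the paper studies
    W1 -= lr * np.outer(delta_h, x)  # replacing these with RMSprop/Adam

loss_after = mse(X_test, Y_test)
```

Despite the backward weights being random and fixed, the forward weights tend to adapt so that the feedback signal becomes useful (the "alignment" effect), and the test loss decreases over training. Swapping the plain SGD update lines for RMSprop or Adam updates is exactly the kind of variation the paper investigates.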