Prediction Accuracy and Adversarial Robustness of Error-Based Input Perturbation Learning

Published: 2024 · Last Modified: 13 Nov 2024 · ICAIIC 2024 · CC BY-SA 4.0
Abstract: Error backpropagation algorithms are essential for training deep neural networks, but the sequential feedback computation they require to propagate error signals causes several problems. Recently, an alternative method called PEPITA has been proposed that uses only two consecutive forward computations with input perturbation. Although PEPITA has demonstrated that successful learning is possible without backward computation, it is still in its early stages and its properties need further investigation. In this study, we analyze the characteristics of PEPITA and propose a new method for generating the modulated input used in the second forward computation. In particular, we show that the adversarial perturbation used to generate attack samples is closely related to the input perturbation process of PEPITA, and we propose using adversarial perturbation in combination with PEPITA learning. The potential of both the original PEPITA and the proposed modification is analyzed through experiments with different activation functions under various attack conditions. The experiments confirm that a proper combination of input modulation and activation function can improve both prediction accuracy and adversarial robustness. This work extends the applicability of PEPITA and lays a foundation for the analysis of alternative learning algorithms.
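To make the two-forward-pass idea concrete, the following is a minimal NumPy sketch of a PEPITA-style update on a tiny two-layer network. All dimensions, the learning rate, and the scale of the fixed random feedback projection `F` are illustrative assumptions, not values from the paper; the update rule follows the general PEPITA scheme (modulate the input with a projection of the output error, then learn from the difference between the two passes) rather than any specific configuration studied here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; sizes are illustrative assumptions.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
F = rng.normal(0.0, 0.05, (n_in, n_out))  # fixed random feedback projection

def forward(x):
    """One forward pass; no gradients are ever propagated backward."""
    h = np.maximum(0.0, W1 @ x)  # ReLU hidden activation
    y = W2 @ h
    return h, y

def pepita_step(x, target, lr=0.01):
    """One PEPITA-style update: two forward passes, no backprop."""
    global W1, W2
    # First (standard) forward pass on the clean input.
    h1, y1 = forward(x)
    e = y1 - target                      # output error
    # Second forward pass on the error-modulated input.
    x_mod = x + F @ e
    h2, _ = forward(x_mod)
    # Updates driven by the difference between the two passes.
    W1 -= lr * np.outer(h1 - h2, x_mod)
    W2 -= lr * np.outer(e, h2)
    return float(np.sum(e ** 2))

# Repeatedly updating on one fixed example illustrates that the
# forward-only rule reduces the output error.
x0 = rng.normal(size=n_in)
t0 = rng.normal(size=n_out)
losses = [pepita_step(x0, t0) for _ in range(300)]
```

Replacing the random-projection modulation `F @ e` with an adversarial perturbation of the input, as the paper proposes, would change only the construction of `x_mod`; the two-forward-pass structure stays the same.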