Optimal Input Gain: All You Need to Supercharge a Feed-Forward Neural Network

TMLR Paper 1542 Authors

04 Sept 2023 (modified: 14 Dec 2023) · Rejected by TMLR
Abstract: Linear transformation of the inputs alters the training performance of feed-forward networks that are otherwise equivalent. However, most linear transforms are treated as a pre-processing operation separate from the actual training. Starting from equivalent networks, it is shown that pre-processing the inputs with a linear transformation is equivalent to multiplying the negative gradient matrix by an autocorrelation matrix in each training iteration. A second-order method is proposed to find the autocorrelation matrix that maximizes learning in a given iteration. When the autocorrelation matrix is diagonal, the method optimizes the input gains. This optimal input gain (OIG) approach is used to improve two first-order two-stage training algorithms, namely back-propagation (BP) and hidden weight optimization (HWO), which alternately update the input weights and solve linear equations for the output weights. Results show that the proposed OIG approach greatly enhances the performance of these first-order algorithms, often allowing them to rival the popular Levenberg-Marquardt approach with far less computation. Since HWO is equivalent to BP with a whitening transformation applied to the inputs, OIG-improved HWO could be a significant building block for more complex deep learning architectures.
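
To illustrate the equivalence stated in the abstract, the following is a minimal numerical sketch (not the authors' implementation) for a single linear layer with squared-error loss: one gradient step taken in input-gain-transformed coordinates matches a step on the original weights whose gradient is multiplied by the corresponding autocorrelation-type matrix. The diagonal gain matrix A, the toy data, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_pat = 4, 3, 8
eta = 0.1

X = rng.normal(size=(n_in, n_pat))          # inputs, one pattern per column
T = rng.normal(size=(n_out, n_pat))         # targets
W = rng.normal(size=(n_out, n_in))          # original input weights
A = np.diag(rng.uniform(0.5, 2.0, n_in))    # diagonal input-gain transform (assumed)

def grad(W, X, T):
    # gradient of 0.5*||T - W X||^2 with respect to W
    return -(T - W @ X) @ X.T

# (1) Gradient step in the transformed coordinates: X' = A X, W' = W A^{-1}
Xp = A @ X
Wp = W @ np.linalg.inv(A)
Wp_new = Wp - eta * grad(Wp, Xp, T)
W_from_transformed = Wp_new @ A              # map the updated weights back

# (2) Gradient step on the original weights, with the gradient
#     multiplied by the autocorrelation-type matrix R = A^T A
R = A.T @ A                                  # diagonal of squared input gains
W_scaled_grad = W - eta * grad(W, X, T) @ R

print(np.allclose(W_from_transformed, W_scaled_grad))   # True
```

Running this prints True, confirming that, for this simplified linear case, training on gain-transformed inputs coincides with scaling the gradient of the original weights by the squared gains, which is the mechanism the OIG approach optimizes.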
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Tim_Genewein1
Submission Number: 1542