Target Propagation via Regularized Inversion for Recurrent Neural Networks

Published: 01 Feb 2023, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: Target Propagation (TP) algorithms compute targets instead of gradients along neural networks and propagate them backward in a way that is similar to, yet different from, gradient back-propagation (BP). The idea initially appeared as a perturbative alternative to BP that may improve gradient evaluation accuracy when training multi-layer neural networks (LeCun, 1985) and has gained popularity as a biologically plausible counterpart of BP. However, TP has appeared in many variations, and a simple, well-identified version remains worthwhile. Revisiting the insights of LeCun (1985) and Lee et al. (2015), we present a simple version of TP based on regularized inversions of the layers of recurrent neural networks. The proposed TP algorithm is easily implementable in a differentiable programming framework. We illustrate the algorithm with recurrent neural networks on long sequences in various sequence modeling problems and delineate the regimes in which the computational complexity of TP can be attractive compared to BP.
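To make the abstract's idea concrete, here is a minimal sketch of propagating targets backward through a recurrent network via regularized inversion. It assumes a vanilla tanh RNN step and approximates each regularized inverse with a generic gradient-based inner solver; the function names (`rnn_step`, `regularized_inverse`, `propagate_targets`) and hyperparameters (`reg`, `n_iter`, `lr`) are illustrative and are not taken from the paper or the linked repository, which should be consulted for the authors' actual implementation.

```python
# Illustrative sketch only: not the authors' implementation
# (see https://github.com/vroulet/tpri for the official code).
import torch


def rnn_step(h_prev, x, W_h, W_x, b):
    """One recurrent step: h_t = tanh(W_h h_{t-1} + W_x x_t + b)."""
    return torch.tanh(h_prev @ W_h.T + x @ W_x.T + b)


def regularized_inverse(target, h_init, x, W_h, W_x, b, reg=1.0, n_iter=50, lr=0.1):
    """Approximate argmin_h ||f(h, x) - target||^2 + reg * ||h - h_init||^2.

    A generic inner solver used purely for illustration; the regularization
    keeps the inverse close to the hidden state from the forward pass.
    """
    h = h_init.clone().requires_grad_(True)
    opt = torch.optim.SGD([h], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = ((rnn_step(h, x, W_h, W_x, b) - target) ** 2).sum()
        loss = loss + reg * ((h - h_init) ** 2).sum()
        loss.backward()
        opt.step()
    return h.detach()


def propagate_targets(hs, xs, final_target, W_h, W_x, b, reg=1.0):
    """Propagate targets backward through time via regularized inversion.

    hs:           hidden states [h_0, ..., h_T] from the forward pass.
    xs:           inputs [x_1, ..., x_T] (xs[t-1] is used to compute h_t).
    final_target: desired value for h_T, e.g. h_T minus a gradient step
                  of the loss with respect to h_T.
    Returns targets [v_1, ..., v_T], one per time step; the weights can then
    be updated so each step maps its input target close to its output target.
    """
    T = len(xs)
    targets = [None] * (T + 1)
    targets[T] = final_target
    for t in range(T, 0, -1):
        # Target for h_{t-1}: regularized inverse image of the target for h_t
        # through the recurrent step at time t.
        targets[t - 1] = regularized_inverse(
            targets[t], hs[t - 1], xs[t - 1], W_h, W_x, b, reg=reg
        )
    return targets[1:]
```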
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=Q5vdEJyhA8
Changes Since Last Submission: We incorporated the comments made by the action editor to finalize the camera-ready version.
Code: https://github.com/vroulet/tpri
Assigned Action Editor: ~Guido_Montufar1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 262