Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: A retrospection loss that enables networks to leverage past parameter states as guidance during training to improve performance.
Abstract: Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce a retrospection loss to improve the performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types, and network architectures.
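
For illustration, below is a minimal PyTorch sketch of a loss with the structure the abstract describes: a term that pulls the network's current outputs toward the optimal (ground-truth) state minus a term that pushes them away from the outputs of a frozen past parameter snapshot. The choice of L1 distance, the scaling factor kappa, the snapshot-refresh period, and the use of softened outputs are all assumptions made for this sketch, not the paper's exact formulation; see the linked code for the authors' implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def retrospection_loss(current_out, optimal_out, past_out, kappa=4.0):
    # Pull current outputs toward the optimal state (ground truth) and
    # push them away from the outputs of a past parameter snapshot.
    # L1 distance and kappa=4.0 are illustrative assumptions.
    toward_optimal = F.l1_loss(current_out, optimal_out)
    away_from_past = F.l1_loss(current_out, past_out)
    return kappa * (toward_optimal - away_from_past)

# Minimal training loop on synthetic data (model, shapes, and
# hyperparameters are all illustrative).
model = nn.Linear(16, 4)
past_model = copy.deepcopy(model).eval()   # frozen past parameter state
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
update_period = 50                         # how often the snapshot is refreshed

x = torch.randn(256, 16)
y = torch.randint(0, 4, (256,))

for step in range(200):
    logits = model(x)
    with torch.no_grad():
        past_logits = past_model(x)        # past state is not updated by gradients
    loss = F.cross_entropy(logits, y) + retrospection_loss(
        logits.softmax(dim=1),
        F.one_hot(y, num_classes=4).float(),
        past_logits.softmax(dim=1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (step + 1) % update_period == 0:
        # Advance the "past" to the current parameters.
        past_model.load_state_dict(model.state_dict())
</```>
```

Note that the retrospection term is added on top of the ordinary task loss, so it acts as a regularizer that rewards moving away from earlier parameter states rather than replacing the supervised objective.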
Code: https://github.com/iclr-retrospection/retrospection
Keywords: Deep Neural Networks, Supervised Learning, Classification, Training Strategy, Generative Adversarial Networks, Convolutional Neural Networks