Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: A retrospection loss that enables networks to leverage past parameter states as guidance during training to improve performance
  • Abstract: Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce a retrospection loss that improves the performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state, while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss yields improved performance across multiple tasks, input types, and network architectures.
  • Code: https://github.com/iclr-retrospection/retrospection
  • Keywords: Deep Neural Networks, Supervised Learning, Classification, Training Strategy, Generative Adversarial Networks, Convolutional Neural Networks
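The abstract describes the loss as pushing the current state towards the optimum while pulling it away from a past state. A minimal sketch of that idea, operating on network outputs, is given below. The function name, the `kappa` scaling coefficient, and the use of L2 distances are illustrative assumptions; the paper's exact formulation and scheduling may differ, so consult the linked code for the authors' implementation.

```python
import numpy as np

def retrospection_loss(current_out, target_out, past_out, kappa=1.0):
    """Hedged sketch of a retrospection-style loss.

    Pulls the current network output towards the target (optimal) output
    while pushing it away from the output produced by a past parameter
    state. `kappa` is a hypothetical scaling coefficient, not taken from
    the paper.
    """
    pull_to_target = np.linalg.norm(current_out - target_out)
    push_from_past = np.linalg.norm(current_out - past_out)
    # Negative when the current output is closer to the target than to
    # the past output, so minimizing it encourages exactly that.
    return kappa * (pull_to_target - push_from_past)
```

In practice such a term would be added to the usual task loss, with `past_out` computed by a checkpointed copy of the network from an earlier training step.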