Progressive prune network for memory efficient continual learning

Joel Nicholls, Sakyasingha Dasgupta

Feb 12, 2018 · ICLR 2018 Workshop Submission
  • Abstract: We present a method for the transfer of knowledge between tasks on memory-constrained devices. In this setting, per-parameter performance across multiple tasks is a critical objective. Specifically, we consider continual training and pruning of a progressive neural network. This type of multi-task network was introduced in Rusu et al. (2016), which optimised for performance while the number of parameters grew quadratically with the number of tasks. Our preliminary results demonstrate that it is possible to limit parameter growth to be linear in the number of tasks, while still achieving a performance boost and sharing knowledge across different tasks.
  • Keywords: Deep learning, transfer learning, compression, classification
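To make the quadratic-versus-linear growth claim concrete, below is a minimal back-of-the-envelope sketch in Python. It is not code from the submission; the layer width, depth, and keep ratio are illustrative assumptions, chosen only to show how lateral adapters make an unpruned progressive network grow quadratically with the number of tasks, while pruning each task's column to a fixed budget keeps growth linear.

```python
# Back-of-the-envelope sketch (not code from the submission): layer width,
# depth, and keep_ratio are illustrative assumptions. It contrasts the
# quadratic parameter growth of an unpruned progressive network (each new
# column adds lateral adapters to every earlier column) with the linear
# growth obtained by pruning each task's column to a fixed budget.

def unpruned_params(num_tasks: int, width: int = 256, depth: int = 3) -> int:
    """Total parameters after num_tasks columns, without pruning."""
    column = depth * width * width  # a column's own weights
    total = 0
    for t in range(1, num_tasks + 1):
        # one lateral adapter per earlier column, per layer
        laterals = (t - 1) * depth * width * width
        total += column + laterals
    return total


def pruned_params(num_tasks: int, width: int = 256, depth: int = 3,
                  keep_ratio: float = 0.2) -> int:
    """Total parameters when each finished column (own weights plus laterals)
    is pruned to a fixed per-task budget, so growth is linear in tasks."""
    # assumed fixed budget per task: keep_ratio of own weights plus laterals
    budget_per_task = keep_ratio * 2 * depth * width * width
    return int(num_tasks * budget_per_task)


if __name__ == "__main__":
    for t in (1, 2, 4, 8, 16):
        print(f"tasks={t:2d}  unpruned={unpruned_params(t):>12,d}  "
              f"pruned={pruned_params(t):>12,d}")
```

Running the sketch shows the unpruned count roughly quadrupling as the task count doubles, whereas the pruned count only doubles, which is the growth behaviour the abstract describes.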