Progressive prune network for memory efficient continual learning

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: We present a method for the transfer of knowledge between tasks on memory-constrained devices. In this setting, the per-parameter performance over multiple tasks is a critical objective. Specifically, we consider continual training and pruning of a progressive neural network. This type of multi-task network was introduced in Rusu et al. (2016), which optimised for performance while the number of parameters grew quadratically with the number of tasks. Our preliminary results demonstrate that it is possible to limit the parameter growth to be linear, while still achieving a performance boost and sharing knowledge across different tasks.
Keywords: Deep learning, transfer learning, compression, classification
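
The sketch below is a rough illustration of the setup the abstract describes, not the authors' code: a new column of a progressive network with a lateral connection from a frozen earlier column, followed by magnitude pruning of the new column so that the retained parameter count per task stays roughly constant (linear rather than quadratic growth). It assumes a PyTorch implementation; the Column class, the lateral adapter, the prune_column helper, and all hyperparameters are hypothetical choices for illustration.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' method).
import torch
import torch.nn as nn

class Column(nn.Module):
    """One task column: two hidden layers plus a task-specific head."""
    def __init__(self, in_dim, hidden, n_classes, prev_column=None):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)
        # Lateral adapter reads hidden activations of the frozen previous column.
        self.prev = prev_column
        if prev_column is not None:
            for p in prev_column.parameters():
                p.requires_grad = False          # earlier task's column stays fixed
            self.lat2 = nn.Linear(hidden, hidden, bias=False)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = self.fc2(h1)
        if self.prev is not None:
            with torch.no_grad():
                p1 = torch.relu(self.prev.fc1(x))   # frozen features from the earlier task
            h2 = h2 + self.lat2(p1)                 # lateral connection
        return self.head(torch.relu(h2))

def prune_column(column, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights of the trainable (new) column."""
    for name, p in column.named_parameters():
        if p.requires_grad and p.dim() > 1:         # prune weight matrices only
            k = int(p.numel() * keep_ratio)
            thresh = p.abs().flatten().kthvalue(p.numel() - k + 1).values
            p.data.mul_((p.abs() >= thresh).float())

# Usage: train col1 on task 1, freeze it, then grow and prune a column for task 2.
col1 = Column(in_dim=784, hidden=256, n_classes=10)
col2 = Column(in_dim=784, hidden=256, n_classes=10, prev_column=col1)
prune_column(col2, keep_ratio=0.5)
```

In this sketch, each new task adds one column of fixed size whose weights are pruned after training, which is one way the per-task parameter budget could be held constant; how the paper schedules training and pruning is described in the submission itself.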