Abstract: Learning multiple sequentially arriving tasks without forgetting previous knowledge, known as Continual Learning (CL), remains a long-standing challenge for neural networks. Most existing CL methods rely on data replay; however, they are not applicable when past data is unavailable or may not be synthetically generated. To address this challenge, we propose Sparsification and Expansion-based Continual Learning (SECL). SECL avoids forgetting previous tasks by ensuring the stability of the CNN via a stability regularization term, which prevents filters detected as important for past tasks from deviating too much when learning a new task. On top of that, SECL makes the network plastic via a plasticity regularization term that leverages the over-parameterization of CNNs to efficiently sparsify the network and tunes unimportant filters, making them relevant for future tasks. Also, SECL enhances the plasticity of the network through a simple but effective heuristic mechanism that aut
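The abstract does not give the exact formulation of the two regularization terms, but a minimal sketch of what filter-level stability and plasticity penalties could look like is shown below. All names (secl_regularizers, old_params, importance, lambda_s, lambda_p) are illustrative assumptions, not the paper's notation; the squared anchoring and L1 sparsity choices are likewise assumed for concreteness.

```python
import torch
import torch.nn as nn


def secl_regularizers(model, old_params, importance, lambda_s, lambda_p):
    """Illustrative filter-level stability + plasticity penalties (assumed form).

    old_params:  dict name -> weight tensor saved after the previous task
    importance:  dict name -> per-filter importance scores in [0, 1]
    """
    stability, plasticity = 0.0, 0.0
    for name, param in model.named_parameters():
        if name not in importance:
            continue
        # View each conv weight as (out_channels, -1), i.e. one row per filter.
        w = param.view(param.size(0), -1)
        w_old = old_params[name].view_as(w)
        imp = importance[name].view(-1, 1)  # (out_channels, 1)

        # Stability: filters important for past tasks are anchored to their old values.
        stability += (imp * (w - w_old).pow(2)).sum()

        # Plasticity: unimportant filters are pushed toward sparsity (L1),
        # freeing capacity that can be reused for future tasks.
        plasticity += ((1.0 - imp) * w.abs()).sum()

    return lambda_s * stability + lambda_p * plasticity
```

In such a setup, the penalty would simply be added to the task loss during training on the new task, e.g. `loss = task_loss + secl_regularizers(model, old_params, importance, lambda_s=1.0, lambda_p=1e-4)`.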