Abstract: Deep-learning models need to continually accumulate knowledge across tasks, given that the number of tasks grows rapidly as the digital world evolves. However, standard deep-learning models are prone to forgetting previously acquired skills when learning new ones. This catastrophic forgetting problem can be mitigated by continual learning. One popular approach in this vein is the family of regularization-based methods, which penalize changes to parameters in proportion to their importance. However, a formal definition of parameter importance and a theoretical analysis of regularization-based methods remain under-explored. In this paper, we first rigorously define parameter importance via influence functions, and then unify the seminal methods (i.e., EWC, SI, and MAS) within a single framework. We present two key theoretical results and conduct extensive experiments on standard benchmarks, which verify the superior performance of our proposed method.
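As a point of reference (a standard sketch of the shared objective, not necessarily the paper's exact formulation), regularization-based methods such as EWC, SI, and MAS typically minimize a loss of the following form, where $\theta_i^{*}$ is the value of parameter $\theta_i$ after the previous task, $\Omega_i$ is its estimated importance, and $\lambda$ trades off plasticity against stability:

\[
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{new}}(\theta) \;+\; \frac{\lambda}{2} \sum_{i} \Omega_i \left(\theta_i - \theta_i^{*}\right)^2
\]

The methods differ chiefly in how $\Omega_i$ is estimated: EWC uses the diagonal of the Fisher information matrix, SI accumulates a path-integral measure of each parameter's contribution to the loss decrease, and MAS uses the averaged magnitude of the gradient of the model output with respect to each parameter.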