Continual meta-learning algorithm

Published: 01 Jan 2022, Last Modified: 15 May 2023. Appl. Intell. 2022.
Abstract: Deep learning has achieved impressive results in many fields. However, this success relies on vast amounts of labeled data, and when labeled data are insufficient, over-fitting occurs. Moreover, the real world tends to be non-stationary, and neural networks cannot learn continually the way humans do: learning new tasks leads to a significant decrease in performance on old tasks. To address these problems, this paper proposes a new meta-learning-based algorithm, CMLA (Continual Meta-Learning Algorithm). CMLA not only extracts the key features of samples but also optimizes the task-gradient update by introducing a cosine-similarity judgment mechanism. The algorithm is evaluated on miniImageNet and Fewshot-CIFAR100 (Canadian Institute For Advanced Research), and the results demonstrate the effectiveness and superiority of CMLA over other state-of-the-art methods. In particular, compared with MAML (Model-Agnostic Meta-Learning) using the standard four-layer convolutional backbone, CMLA improves 1-shot and 5-shot accuracy by 15.4% and 16.91%, respectively, under the 5-way setting on miniImageNet. CMLA not only reduces instability in the adaptation process but also alleviates the stability-plasticity dilemma to a certain extent, achieving the goal of continual learning.
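The abstract does not give implementation details, so the following is only a minimal sketch of how a cosine-similarity judgment over task gradients might gate a MAML-style meta-update: task gradients that conflict (low cosine similarity) with the consensus direction are dropped from the outer update. The function name `cosine_gate_meta_update`, the threshold `tau`, and the averaging rule are all hypothetical illustrations, not the authors' actual CMLA procedure.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def cosine_gate_meta_update(theta, task_grads, lr=0.01, tau=0.0):
    """Hypothetical meta-update: keep only task gradients whose cosine
    similarity with the mean gradient exceeds tau, so interfering task
    directions do not pull the meta-parameters back and forth."""
    mean_grad = np.mean(task_grads, axis=0)
    kept = [g for g in task_grads
            if cosine_similarity(g, mean_grad) > tau]
    if not kept:                      # all tasks conflict: skip this update
        return theta
    return theta - lr * np.mean(kept, axis=0)

# Toy usage: three task gradients, one pointing the opposite way.
theta = np.zeros(4)
task_grads = [np.array([1.0, 0.5, 0.0, 0.0]),
              np.array([0.9, 0.6, 0.1, 0.0]),
              np.array([-1.0, -0.5, 0.0, 0.0])]  # conflicting task
theta = cosine_gate_meta_update(theta, task_grads)
print(theta)  # update driven by the two agreeing tasks only

Filtering by gradient agreement is one plausible way such a mechanism could reduce the adaptation instability the abstract mentions, since updates are taken only along directions most tasks share.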