An Efficient Method of Lifelong Learning with Differentiable Memory for Edge Computing

Published: 01 Jan 2025, Last Modified: 05 Nov 2025 · HICSS 2025 · CC BY-SA 4.0
Abstract: The proliferation of deep learning has brought great success across domains such as computer vision, natural language processing, and edge computing. However, this success often comes at the cost of substantial computational power, memory usage, and extensive data requirements, posing challenges for edge devices with limited resources. This paper presents a novel, memory-efficient incremental learning method optimized for edge computing. By combining differentiable memory storage with lifelong learning principles, the proposed method learns and stores knowledge concurrently, significantly reducing the need for extensive retraining while preserving privacy and improving space efficiency. Experimental results on three benchmark datasets show accuracy gains of up to 7 percentage points, approximately 47% better space efficiency, and 36% better time efficiency compared with state-of-the-art methods. The method surpasses conventional approaches in accuracy, space efficiency, and learning time, making it a robust solution for continual learning in resource-constrained environments.
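The abstract does not specify the paper's architecture, but the general idea of differentiable memory can be sketched as a key-value store read with soft attention: a query is compared against all stored keys, and the returned value is an attention-weighted blend, so the lookup remains differentiable end-to-end. All names and dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(keys, values, query):
    """Soft read from a differentiable memory.

    keys:   (slots, key_dim) matrix of stored addressing keys
    values: (slots, val_dim) matrix of stored content vectors
    query:  (key_dim,) query vector
    Returns an attention-weighted blend of the value rows.
    """
    scores = keys @ query        # similarity of the query to each slot's key
    weights = softmax(scores)    # differentiable addressing weights, sum to 1
    return weights @ values      # blended value vector, shape (val_dim,)

# Toy usage with random memory contents (hypothetical sizes)
rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 4))    # 8 memory slots, key dimension 4
values = rng.normal(size=(8, 4))  # value dimension 4
query = rng.normal(size=4)

out = memory_read(keys, values, query)
print(out.shape)  # (4,)
```

Because every step is a smooth function of the inputs, gradients can flow through the memory access during training, which is what allows knowledge storage and learning to proceed concurrently.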