Abstract: Lifelong learning (LL) machines are designed to operate safely in dynamic environments by continually updating their knowledge. Conventional LL paradigms often assume that new data arrive labeled and that each LL machine must learn independently from its environment. However, human labeling is expensive and impractical in remote conditions where automation is most desired. We introduce the Peer Parallel Lifelong Learning (PEEPLL) framework for distributed multi-agent lifelong learning, in which agents continually learn online by actively requesting assistance from other agents instead of relying on the expensive environment to teach them. Unlike classical distributed AI, where communication scales poorly, lifelong learners need to communicate only about information they have not yet learned. Additionally, agents reply only when they are highly confident: our TRUE confidence score uses a compute-efficient application of a Variational Autoencoder to quantify prediction confidence without requiring data reconstruction. TRUE outperforms traditional entropy-based confidence scores, reducing communication overhead by 18.05% on CIFAR-100 and 5.8% on MiniImageNet. To improve system resilience to low-quality or adversarial responses, our agents selectively accept a subset of received responses using the REFINE algorithm, which yields a 51.99% increase in the percentage of correct accepted responses on CIFAR-100 and 25.79% on MiniImageNet. Like traditional LL agents, PEEPLL agents store a subset of previously acquired knowledge as memory, learning from it alongside new information to prevent forgetting. We propose a Dynamic Memory-Update mechanism for PEEPLL agents that improves QA's classification performance by 44.17% on CIFAR-100 and 26.8% on MiniImageNet compared to the baseline Memory-Update mechanism.
Our findings demonstrate that a PEEPLL agent can outperform an LL agent even when the latter has environmental supervision available, thus significantly reducing the need for labeling. PEEPLL provides a framework to facilitate research in distributed multi-agent LL, marking a substantial step towards practical, scalable lifelong learning technologies at the edge.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We made changes suggested by the AE.
Assigned Action Editor: ~Zhanyu_Wang1
Submission Number: 2931