Knowledge Swapping via Learning and Unlearning

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: We introduce Knowledge Swapping, a novel task designed to selectively regulate the knowledge of a pretrained model by simultaneously forgetting user-specified information, retaining essential knowledge, and acquiring new knowledge. By analyzing the feature hierarchy, we find that incremental learning typically progresses from low-level representations to higher-level semantics, whereas forgetting tends to occur in the opposite direction, starting from high-level semantics and moving down to low-level features. Building on this observation, we propose to benchmark the knowledge swapping task with a Learning Before Forgetting strategy. Comprehensive experiments on tasks such as image classification, object detection, and semantic segmentation validate the effectiveness of the proposed strategy. The source code is available at https://github.com/xingmingyu123456/KnowledgeSwapping.
Lay Summary: As artificial intelligence (AI) systems learn more and more, one big challenge is helping them forget certain outdated or unwanted knowledge — without messing up what they already know or need to learn next. Imagine trying to forget your old home address while still remembering your phone number and learning a new one — all at once! In our research, we introduce a new task called Knowledge Swapping, which aims to give AI models the ability to do just that: forget specific information, keep what’s essential, and learn new things at the same time. We studied how AI “thinks,” or more precisely, how it builds up knowledge layer by layer — from simple visual features like edges to more complex ideas like object categories. Interestingly, we found that forgetting starts at the top (complex ideas) and then trickles down to simpler ones. Based on this insight, we propose a learning strategy called Learning Before Forgetting to guide AI systems through this process in a more stable and effective way. We tested our approach across a range of tasks, including recognizing objects in images and identifying different parts of scenes, and found that our method helps AI adapt better — learning new things while responsibly letting go of the old.
Link To Code: https://github.com/xingmingyu123456/KnowledgeSwapping
Primary Area: Deep Learning->Everything Else
Keywords: continual learning, machine unlearning, knowledge swapping
Submission Number: 4248