Brain-inspired Multi-View Incremental Learning for Knowledge Transfer and Retention

27 Sept 2024 (modified: 22 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Brain-inspired Knowledge Transfer; Hebbian Learning; Multi-view Incremental Learning; Orthogonal Projection
Abstract: The human brain exhibits remarkable proficiency in dynamic learning and adaptation, seamlessly integrating prior knowledge with new information and thereby enabling flexible memory retention and efficient transfer across multiple views. In contrast, traditional multi-view learning methods are predominantly designed for static, fixed-view datasets, leading to the notorious "view forgetting phenomenon", in which the introduction of new views erodes prior knowledge. This stands in stark contrast to the brain's ability to continuously integrate and transfer past knowledge, retaining old information while assimilating new insights, and it raises a critical challenge: how to efficiently learn and integrate new views while preserving knowledge from previously acquired views and enabling flexible knowledge transfer across diverse perspectives.

Inspired by underlying neural processing mechanisms, we propose a view transfer learning framework named Hebbian View Orthogonal Projection (HVOP), which enables efficient knowledge transfer and sharing across multi-view data. HVOP constructs a knowledge transfer space (KTS) that reduces interference between old and new views through an orthogonal learning mechanism. By further incorporating recursive lateral connections and Hebbian learning, the proposed model endows the learning process with brain-like dynamic adaptability, enhancing knowledge transfer and integration and bringing the model closer to human cognition. We extensively validate the proposed model on node classification tasks and demonstrate its superior performance in knowledge retention and transfer compared to traditional methods. Our results underscore the potential of biologically inspired mechanisms for advancing multi-view learning and mitigating the view forgetting phenomenon.
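The abstract combines two ingredients: a Hebbian (outer-product) weight update and an orthogonal projection that keeps new-view updates out of the subspace used by old views. Since the paper's actual implementation is not available here, the following is only a minimal illustrative sketch of that general idea; the function names, the ridge term `alpha`, and the learning rate `eta` are all assumptions, not the authors' HVOP method.

```python
# Hypothetical sketch: Hebbian update projected orthogonally to the
# input subspace of previously learned views, so that responses to
# old-view inputs are left (approximately) untouched.
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_projector(A, alpha=1e-3):
    """Projector onto the subspace orthogonal to the columns of A.

    A: (d, k) matrix whose columns span the inputs of old views;
    alpha is a small ridge term for numerical stability (assumed).
    """
    d, k = A.shape
    return np.eye(d) - A @ np.linalg.inv(A.T @ A + alpha * np.eye(k)) @ A.T

def hebbian_orthogonal_update(W, x, y, P, eta=0.1):
    """Hebbian outer-product update eta * y x^T, with the presynaptic
    input x first projected by P so the update does not interfere
    with directions occupied by old views."""
    return W + eta * np.outer(y, P @ x)

# Old view occupies a 2-D subspace of a 5-D input space.
A = rng.standard_normal((5, 2))
P = orthogonal_projector(A)

W = np.zeros((3, 5))             # 3 output units, 5 input features
x_new = rng.standard_normal(5)   # a sample from the new view
y_new = rng.standard_normal(3)   # its postsynaptic activity
W = hebbian_orthogonal_update(W, x_new, y_new, P)

# Because the update lives in the orthogonal complement of span(A),
# the new weights barely respond to old-view inputs: W @ A ~ 0.
print(np.abs(W @ A).max())
```

The print at the end shows that the learned weights have near-zero response along the old-view subspace, which is the mechanism by which orthogonal projection mitigates view forgetting in this toy setting.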
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10317