Orchestrating Plasticity and Stability: A Continual Knowledge Graph Embedding Framework with Bio-Inspired Dual-Mask Mechanism

Published: 05 Sept 2024, Last Modified: 01 Dec 2024, ACML 2024 Conference Track, CC BY 4.0
Keywords: Knowledge graph, Knowledge graph embedding, Graph continual learning
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
TL;DR: To address the catastrophic forgetting caused by knowledge growth in real-world applications, we design a bio-inspired dual-mask continual knowledge graph embedding framework.
Abstract: Learning in biological systems involves the intricate modeling of diverse entities and their interrelations, leading to the evolution of logical knowledge networks with accumulating experience. Analogously, knowledge graphs serve as semantic representations of entity relationships, playing a vital role in natural language processing and graph representation learning. However, contemporary knowledge graph embedding models often neglect real-world event updates, while existing continual knowledge graph research predominantly relies on conventional learning methods that inadequately leverage graph structure, thereby compromising their continual learning capabilities. This study introduces a novel Continual Mask Knowledge Graph Embedding framework (CMKGE) designed to address these limitations. CMKGE integrates semantic attributes, network structure, and continual learning mechanisms to capture the dynamic evolution of knowledge. Inspired by biological signal propagation and Dale's principle, we propose a dual-mask mechanism for neuronal inhibition and activation that automatically filters critical old knowledge, enhancing both model plasticity and stability. Through comprehensive evaluations on four datasets, we demonstrate CMKGE's superiority over state-of-the-art continual embedding models.
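The abstract describes a dual-mask mechanism that inhibits updates to parameters encoding critical old knowledge while activating the rest. The paper's actual masking rule is not given here, so the following is only a minimal sketch of the general idea under assumed names: an `importance` score per embedding dimension (hypothetical), an inhibition mask that freezes highly important parameters (stability), and an activation mask that leaves unimportant ones fully plastic.

```python
import numpy as np

def dual_mask_update(embeddings, gradients, importance,
                     act_thresh=0.5, inh_thresh=0.9, lr=0.1):
    """Illustrative dual-mask gradient step (not the paper's exact rule).

    Parameters whose importance to previously learned knowledge exceeds
    inh_thresh are inhibited (update scaled to zero, preserving stability);
    parameters below act_thresh are fully activated (plastic); parameters
    in between receive an update scaled down by their importance.
    """
    inhibit = importance >= inh_thresh    # protect critical old knowledge
    activate = importance < act_thresh    # free capacity for new knowledge
    scale = np.where(inhibit, 0.0,
                     np.where(activate, 1.0, 1.0 - importance))
    return embeddings - lr * scale * gradients
```

Usage: with `importance = [0.95, 0.2, 0.7]`, the first dimension is frozen, the second takes a full step, and the third a partial one, which is the plasticity/stability trade-off the abstract attributes to the dual masks.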
A Signed Permission To Publish Form In Pdf: pdf
Primary Area: Deep Learning (architectures, deep reinforcement learning, generative models, deep learning theory, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 349