Negative-sample-free knowledge graph embedding

Published: 01 Jan 2024 · Last Modified: 16 May 2025 · Data Min. Knowl. Discov. 2024 · CC BY-SA 4.0
Abstract: Recently, knowledge graphs (KGs) have been shown to benefit many machine learning applications across multiple domains (e.g., autonomous driving, agriculture, biomedicine, recommender systems). However, KGs suffer from incompleteness, which motivates the task of KG completion: inferring new (unobserved) links between existing entities based on observed links. This task is addressed using probabilistic, rule-based, or embedding-based approaches, of which the last has been shown to consistently outperform the others. It relies, however, on negative sampling, which assumes that every observed link is “true” and that every unobserved link is “false”. Negative sampling increases the computational complexity of the learning process and introduces noise into it. We propose NSF-KGE, a framework for KG embedding that does not require negative sampling, yet achieves performance comparable to that of negative sampling-based approaches. NSF-KGE employs objectives from the non-contrastive self-supervised learning literature to learn representations that are invariant to relation transformations (e.g., translation, scaling, rotation) while avoiding representation collapse.
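To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how a non-contrastive objective can train KG embeddings on observed triples only. It assumes a TransE-style translation as the relation transformation and a VICReg-style variance/covariance regularizer (Bardes et al., 2022) as the collapse-avoidance mechanism; the class and function names, hyperparameters, and the specific choice of objective are illustrative, since the abstract does not specify which non-contrastive objective NSF-KGE uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NegativeSampleFreeKGE(nn.Module):
    """Illustrative sketch: entity/relation embeddings trained with a
    non-contrastive objective instead of negative sampling."""

    def __init__(self, num_entities: int, num_relations: int, dim: int = 128):
        super().__init__()
        self.entity = nn.Embedding(num_entities, dim)
        self.relation = nn.Embedding(num_relations, dim)

    def forward(self, heads, relations, tails):
        # Translation as the relation transformation (TransE-style);
        # the paper also mentions scaling and rotation as alternatives.
        z1 = self.entity(heads) + self.relation(relations)
        z2 = self.entity(tails)
        return z1, z2


def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """Invariance + variance + covariance terms. The variance and
    covariance terms prevent representation collapse without any
    negative samples."""
    n, d = z1.shape
    # Invariance: the transformed head should match the tail for
    # every observed (true) triple.
    sim = F.mse_loss(z1, z2)
    # Variance: keep each embedding dimension's std above 1 so all
    # embeddings cannot shrink to a single point.
    std1 = torch.sqrt(z1.var(dim=0) + 1e-4)
    std2 = torch.sqrt(z2.var(dim=0) + 1e-4)
    var = F.relu(1.0 - std1).mean() + F.relu(1.0 - std2).mean()
    # Covariance: decorrelate embedding dimensions so information is
    # spread across the whole representation.
    z1c, z2c = z1 - z1.mean(0), z2 - z2.mean(0)
    cov1 = (z1c.T @ z1c) / (n - 1)
    cov2 = (z2c.T @ z2c) / (n - 1)
    off_diag = lambda m: m.flatten()[:-1].view(d - 1, d + 1)[:, 1:]
    cov = off_diag(cov1).pow(2).sum() / d + off_diag(cov2).pow(2).sum() / d
    return sim_w * sim + var_w * var + cov_w * cov


# Usage: one training step on a batch of observed triples only.
model = NegativeSampleFreeKGE(num_entities=1000, num_relations=20)
h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 20, (32,))
t = torch.randint(0, 1000, (32,))
loss = vicreg_loss(*model(h, r, t))
loss.backward()
```

Note that every term in this loss is computed from observed triples alone, so the quadratic-in-batch-size cost and the noise of sampling "false" links are both avoided, which is the trade-off the abstract highlights.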