Abstract: In recent years, numerous studies have sought to enhance the capabilities of pretrained language models for Knowledge Graph Completion (KGC) tasks by integrating structural information from knowledge graphs. However, existing approaches have not effectively combined the structural attributes of knowledge graphs with the textual descriptions of entities to generate entity encodings. To address this issue, we introduce MoCoKGC (Momentum Contrast Entity Encoding for Knowledge Graph Completion), which incorporates three primary encoders: an entity-relation encoder, an entity encoder, and a momentum entity encoder. Through a slowly updated entity encoding mechanism, the momentum entity encoder maintains a negative-sample queue and characterizes each entity's neighborhood. On the standard evaluation metric, Mean Reciprocal Rank (MRR), MoCoKGC achieves a 7.1% improvement on the WN18RR dataset and an 11% improvement on the Wikidata5M dataset, while also surpassing the current best model on the FB15k-237 dataset. A series of ablation experiments further examines the role and contribution of each component and parameter of the model.
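To make the momentum mechanism concrete, the sketch below illustrates a MoCo-style slowly updated encoder paired with a negative-sample queue, which is the general technique the abstract names. This is a minimal sketch, not the paper's implementation: the class name MomentumEntityQueue, the momentum coefficient m, the queue size K, and the embedding dimension d are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MomentumEntityQueue(nn.Module):
    """MoCo-style slowly updated encoder plus a fixed-size negative-sample queue.
    Hyperparameters m, K, d are illustrative assumptions, not the paper's values."""

    def __init__(self, encoder: nn.Module, m: float = 0.999, K: int = 4096, d: int = 768):
        super().__init__()
        self.encoder = encoder                           # online entity encoder, trained by backprop
        self.momentum_encoder = copy.deepcopy(encoder)   # slowly updated copy, no gradients
        for p in self.momentum_encoder.parameters():
            p.requires_grad = False
        self.m = m
        # Queue of K past momentum encodings reused as negative samples.
        self.register_buffer("queue", F.normalize(torch.randn(K, d), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def momentum_update(self) -> None:
        # theta_k <- m * theta_k + (1 - m) * theta_q
        for q, k in zip(self.encoder.parameters(), self.momentum_encoder.parameters()):
            k.data.mul_(self.m).add_(q.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def enqueue(self, keys: torch.Tensor) -> None:
        # Overwrite the oldest queue entries with the newest momentum encodings.
        K = self.queue.shape[0]
        n = keys.shape[0]
        idx = (int(self.ptr) + torch.arange(n)) % K
        self.queue[idx] = keys
        self.ptr[0] = (int(self.ptr) + n) % K
```

In a typical training step under these assumptions, an InfoNCE-style contrastive loss would score the query entity encoding against the momentum encoding of the positive entity and against the queue entries; momentum_update() would be called once per step, and enqueue() would then push the new momentum encodings into the queue.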
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment
Languages Studied: English