Joint semantic embedding with structural knowledge and entity description for knowledge representation learning
Abstract: Previous work mainly exploits triple structural information to learn knowledge graph representations, which leads to poor link-prediction performance, especially for new or sparsely connected (few-fact) entities. It is intuitive to introduce textual information to supply the missing semantics for knowledge representation. However, existing methods align the two modalities only at the level of individual words or the score function, and have not yet jointly modeled textual and structural information. Moreover, since entity descriptions often contain redundant content, extracting the relevant information while suppressing the irrelevant parts of the text is a challenging task. To tackle these problems, this paper proposes JointSE, a novel knowledge representation learning framework for joint semantic embedding of structural knowledge and entity descriptions. First, we design a mutual attention mechanism that filters the informative parts of fact triples and entity descriptions with respect to a specific relation. Second, we project the triples into the textual semantic space via dot products, connecting each triple with its relevant entity description. In addition, we enhance both the triple-based and the text-based entity representations with a graph neural network to capture more useful graph structure information. Finally, extensive experiments on benchmark datasets and a Chinese legal-provisions dataset demonstrate that JointSE effectively fuses triple information, textual semantic information, and graph structure information, and that it outperforms previous methods on entity prediction and relation prediction tasks.
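To make the relation-specific filtering concrete, the following is a minimal NumPy sketch of one plausible form of the mutual attention step: description word vectors are scored against a relation embedding by dot product, softmax-normalized, and pooled into a relation-aware text representation. All names, dimensions, and the exact scoring function are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mutual_attention(relation, desc_words):
    """Pool description word vectors into one embedding, weighting each
    word by its (softmax-normalized) dot-product relevance to a relation.

    relation:   (d,)   relation embedding
    desc_words: (n, d) word embeddings of the entity description
    returns:    (d,)   relation-aware description embedding
    """
    scores = desc_words @ relation            # (n,) dot-product relevance
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ desc_words               # weighted sum of word vectors

# Toy example: a 4-word description with 3-dimensional embeddings.
rng = np.random.default_rng(0)
relation = rng.normal(size=3)
desc_words = rng.normal(size=(4, 3))
text_emb = mutual_attention(relation, desc_words)
```

Because the attention weights depend on the relation, the same description yields different text embeddings for different relations, which is what lets the model suppress description content irrelevant to the relation at hand.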