Abstract: Knowledge graph completion aims to infer missing or new entities and relations in knowledge graphs. The long-tail distribution of relations gives rise to the few-shot knowledge graph completion problem. Existing solutions do not fully address this problem: the scarcity of training samples still degrades completion performance. In this paper, we propose a novel data augmentation mechanism to overcome the learning difficulty caused by scarce training samples, together with a novel feature fusion scheme that reinforces the augmentation. Specifically, we use a conditional generative model to increase the number of entity samples in both the entity structure view and the textual content view, and adaptively fuse entity structural and textual features to obtain informative entity representations. We then integrate adaptive feature fusion and generative sample augmentation with few-shot relation inference into an end-to-end learning framework. Extensive experiments on five real-world knowledge graphs show a significant advantage of the proposed algorithm over state-of-the-art baselines, as well as the effectiveness of the proposed feature fusion and sample augmentation components.
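To make the adaptive feature fusion idea concrete, the sketch below shows one common way such a view-level gate could be realized in PyTorch. This is an illustrative sketch only: the class name `AdaptiveFusion`, the sigmoid-gated form, and the embedding dimension are assumptions for exposition and are not taken from the paper's actual architecture.

```python
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Gated fusion of structural and textual entity embeddings.

    Illustrative sketch; the paper's exact fusion mechanism and
    dimensions are not specified in the abstract.
    """

    def __init__(self, dim: int):
        super().__init__()
        # The gate decides, per dimension, how much weight each view gets.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, h_struct: torch.Tensor, h_text: torch.Tensor) -> torch.Tensor:
        # h_struct, h_text: (batch, dim) embeddings from the two views.
        g = self.gate(torch.cat([h_struct, h_text], dim=-1))
        # Convex combination of the two views, weighted by the learned gate.
        return g * h_struct + (1.0 - g) * h_text


if __name__ == "__main__":
    fusion = AdaptiveFusion(dim=128)
    h_s = torch.randn(4, 128)  # hypothetical structural embeddings
    h_t = torch.randn(4, 128)  # hypothetical textual embeddings
    print(fusion(h_s, h_t).shape)  # torch.Size([4, 128])
```

A learned gate of this kind lets the model lean on the textual view when an entity's graph neighborhood is sparse, which is the typical situation for long-tail relations; the generated samples from the conditional generative model would feed the same fusion module as real samples.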