Injecting Background Knowledge into Embedding Models for Predictive Tasks on Knowledge Graphs

Dec 11, 2020 (edited Mar 31, 2021) · ESWC 2021 Research
  • Keywords: Knowledge graphs, Embedding Models, Representation learning, Background Knowledge, Link prediction, Triple classification
  • Abstract: Embedding models have been successfully exploited for Knowledge Graph refinement. In these models, the data graph is projected into a low-dimensional space in which graph structural information is preserved as much as possible, enabling an efficient computation of solutions. We propose a solution for injecting available background knowledge (schema axioms) to further improve the quality of the embeddings. The method has been applied to enhance existing models to produce embeddings that can encode knowledge that is not merely observed but rather derived by reasoning on the available axioms. An experimental evaluation on link prediction and triple classification tasks demonstrates the improvement yielded by the proposed method over the original models.
  • Url: https://docs.google.com/document/d/1dKdg9osi43PLmHCP4d5hmAv_T56Zfv1uMf8rYQD60kw/edit?usp=sharing
  • First Author Is Student: No
  • Subtrack: Machine Learning
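The abstract describes injecting schema axioms so that embeddings capture entailed, not just observed, triples. A minimal sketch of this idea, under assumptions not taken from the paper: a TransE-style scoring function and a `materialize` helper (both hypothetical names) that augments the training triples with those entailed by `rdfs:subPropertyOf` axioms before embedding training. The actual injection mechanism in the paper may differ.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE plausibility: higher (less negative) means more plausible,
    # since a true triple should satisfy h + r ≈ t.
    return -np.linalg.norm(h + r - t)

def materialize(triples, subproperty_of):
    # Augment observed triples with those entailed by subPropertyOf axioms:
    # (h, p, t) together with p ⊑ q entails (h, q, t).
    inferred = {(h, q, t)
                for (h, p, t) in triples
                for (p2, q) in subproperty_of if p2 == p}
    return set(triples) | inferred

# Hypothetical toy data: one observed triple and one schema axiom.
triples = {("alice", "hasMother", "carol")}
axioms = {("hasMother", "hasParent")}  # hasMother ⊑ hasParent
augmented = materialize(triples, axioms)
# augmented now also contains ("alice", "hasParent", "carol")
```

Training any embedding model on `augmented` instead of `triples` is one simple way to make the learned space reflect knowledge derived by reasoning on the axioms.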