TL;DR: We conduct a rigorous study of the expressive power of knowledge graph foundation models and propose a general framework that provably increases it.
Abstract: Knowledge Graph Foundation Models (KGFMs) are at the frontier for deep learning on knowledge graphs (KGs), as they can generalize to completely novel knowledge graphs with different relational vocabularies. Despite their empirical success, our theoretical understanding of KGFMs remains very limited. In this paper, we conduct a rigorous study of the expressive power of KGFMs. Specifically, we show that the expressive power of KGFMs directly depends on the *motifs* that are used to learn the relation representations. We then observe that the most typical motifs used in the existing literature are *binary*, as the representations are learned based on how pairs of relations interact, which limits the expressiveness of these models. As part of our study, we design more expressive KGFMs using richer motifs, which necessitate learning relation representations based on, e.g., how triples of relations interact with each other. Finally, we empirically validate our theoretical findings, showing that the use of richer motifs results in better performance on a wide range of datasets drawn from different domains.
Lay Summary: A knowledge graph foundation model (KGFM) can understand and make predictions on entirely new knowledge graphs: structured collections of entities (like people or places) and the relationships between them (like "lives in" or "is part of"). However, we still don't fully understand why KGFMs work so well. In this paper, we dive into the theory behind them. We find that their success depends on the building blocks they use to understand relationships, which we call "motifs." Current models mostly look at how two relationships interact, but we show that this is often not enough. By designing models that consider richer patterns, such as interactions among three or more relations, we obtain more expressive models that can distinguish links in knowledge graphs that previously could not be told apart. We test these ideas and find that the new models perform better across a variety of real-world datasets.
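To make the binary-versus-richer-motif distinction concrete, here is a minimal, self-contained sketch in plain Python. It is not taken from the released MOTIF code: the toy triples, the entity and relation names, and the `incident`, `binary_motifs`, and `ternary_motifs` variables are all invented for illustration. It simply enumerates, for a toy KG, how pairs versus triples of relations co-occur around shared entities:

```python
from collections import defaultdict
from itertools import combinations

# Toy KG as (head, relation, tail) triples; all names are invented.
triples = [
    ("alice",  "lives_in",   "london"),
    ("alice",  "works_at",   "acme"),
    ("bob",    "lives_in",   "london"),
    ("acme",   "located_in", "london"),
    ("london", "capital_of", "uk"),
]

# For every entity, record which relations touch it and in what position
# ("h" = the entity is the head of the triple, "t" = it is the tail).
incident = defaultdict(set)
for h, r, t in triples:
    incident[h].add((r, "h"))
    incident[t].add((r, "t"))

# Binary motifs: how PAIRS of relations interact through a shared entity.
# E.g. ("lives_in", "works_at", "h-h") means both relations share a head.
# This pairwise signal is what most existing KGFMs build on.
binary_motifs = set()
for ents in incident.values():
    for (r1, p1), (r2, p2) in combinations(sorted(ents), 2):
        if r1 != r2:
            binary_motifs.add((r1, r2, f"{p1}-{p2}"))

# Ternary motifs: how TRIPLES of distinct relations co-occur around one
# entity. These richer motifs can separate relation patterns that look
# identical to any purely pairwise view.
ternary_motifs = set()
for ents in incident.values():
    for (r1, p1), (r2, p2), (r3, p3) in combinations(sorted(ents), 3):
        if len({r1, r2, r3}) == 3:
            ternary_motifs.add(((r1, p1), (r2, p2), (r3, p3)))

print("binary: ", sorted(binary_motifs))
print("ternary:", sorted(ternary_motifs))
```

In this toy graph, the entity "london" is touched by three distinct relations, so it yields a ternary motif that no pairwise summary of the same graph records.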
Link To Code: https://github.com/HxyScotthuang/MOTIF
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Neural Networks, Link Prediction, Expressivity Study, Graph Foundation Models
Submission Number: 5108