Modeling Transitivity and Cyclicity in Directed Graphs via Binary Code Box Embeddings

Published: 31 Oct 2022, 18:00, Last Modified: 15 Jan 2023, 01:46, NeurIPS 2022 Accept
Keywords: graph representation learning, geometric representation learning, directed graphs, cyclic graphs, transitivity
Abstract: Modeling directed graphs with differentiable representations is a fundamental requirement for performing machine learning on graph-structured data. Geometric embedding models (e.g., hyperbolic, cone, and box embeddings) excel at this task, exhibiting useful inductive biases for directed graphs. However, modeling directed graphs that both contain cycles and exhibit some degree of transitivity, two properties common in real-world settings, is challenging. Box embeddings, which can be thought of as representing the graph as an intersection over some learned super-graphs, have a natural inductive bias toward modeling transitivity, but (as we prove) cannot model cycles. To address this, we propose binary code box embeddings, where a learned binary code selects a subset of graphs for intersection. We explore several variants, including global binary codes (amounting to a union over intersections) and per-vertex binary codes (allowing greater flexibility), as well as methods of regularization. Theoretical and empirical results show that the proposed models not only preserve a useful inductive bias of transitivity but also have sufficient representational capacity to model arbitrary graphs, including graphs with cycles.
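The abstract's core idea can be illustrated with a minimal sketch. In box embeddings, each vertex is a hyperrectangle and a directed edge u→v is scored by box containment; viewing the model as an intersection over several learned super-graphs, a binary code then selects which super-graphs must agree for an edge to hold. The helper names below (`contains`, `edge_intersection`) and the toy boxes are illustrative assumptions, not the paper's actual implementation, which learns the boxes and codes differentiably.

```python
import numpy as np

def contains(outer, inner):
    """True if box `inner` lies inside box `outer`; each box is a (min, max)
    pair of coordinate arrays. Containment is the edge criterion for box
    embeddings and is transitive by construction."""
    lo_o, hi_o = outer
    lo_i, hi_i = inner
    return bool(np.all(lo_o <= lo_i) and np.all(hi_i <= hi_o))

def edge_intersection(boxes_u, boxes_v, code):
    """Edge u -> v exists iff containment holds in every super-graph selected
    by the binary `code` (i.e., an intersection over the selected graphs).
    A zero in the code drops that super-graph from the intersection,
    which is what lets the model break transitivity and admit cycles."""
    return all(contains(bu, bv)
               for bu, bv, c in zip(boxes_u, boxes_v, code) if c)

# Toy example: K = 2 super-graphs, 1-D boxes.
u = [(np.array([0.0]), np.array([4.0])), (np.array([2.0]), np.array([3.0]))]
v = [(np.array([1.0]), np.array([2.0])), (np.array([0.0]), np.array([5.0]))]
print(edge_intersection(u, v, code=[1, 0]))  # containment checked only in graph 0
print(edge_intersection(u, v, code=[1, 1]))  # graph 1 fails, so the edge is dropped
```

A global code applies one such mask to every edge (a union over intersections when summed over codes), while a per-vertex code lets each vertex choose its own subset of super-graphs.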