Discrete Graph Auto-Encoder

Published: 02 Apr 2024, Last Modified: 02 Apr 2024. Accepted by TMLR.
Abstract: Despite advances in generative methods, accurately modeling the distribution of graphs remains a challenging task, primarily because of the absence of a predefined or inherent unique graph representation. Two main strategies have emerged to tackle this issue: 1) restricting the number of possible representations by sorting the nodes, or 2) using permutation-invariant/equivariant functions, specifically Graph Neural Networks (GNNs). In this paper, we introduce a new framework named Discrete Graph Auto-Encoder (DGAE), which leverages the strengths of both strategies and mitigates their respective limitations. In essence, we propose a two-step strategy. We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations, each node being represented by a sequence of quantized vectors. In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model based on the Transformer architecture. Through multiple experimental evaluations, we demonstrate the competitive performance of our model in comparison to the existing state of the art across various datasets. Several ablation studies support the merits of our method.
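The first step of the abstract's pipeline relies on vector quantization: each continuous node embedding produced by the equivariant encoder is snapped to its nearest entry in a learned codebook, yielding a discrete code per node. The sketch below is a minimal, hypothetical illustration of that nearest-neighbor quantization step in NumPy; the function name, shapes, and codebook are assumptions for illustration, not the authors' implementation (see the linked repository for the actual code).

```python
import numpy as np

def quantize_nodes(node_embeddings, codebook):
    """Map each continuous node embedding to its nearest codebook vector.

    node_embeddings: (num_nodes, dim) array of encoder outputs.
    codebook: (codebook_size, dim) array of learned discrete codes.
    Returns the index of the chosen code per node and the quantized vectors.
    """
    # Pairwise squared distances between embeddings and codebook entries.
    diff = node_embeddings[:, None, :] - codebook[None, :, :]
    dists = np.sum(diff ** 2, axis=-1)   # (num_nodes, codebook_size)
    codes = np.argmin(dists, axis=-1)    # nearest code index per node
    quantized = codebook[codes]          # discrete latent per node
    return codes, quantized

# Toy example: 4 nodes with 2-D embeddings, a codebook of 3 entries.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 2))
book = rng.normal(size=(3, 2))
codes, quantized = quantize_nodes(emb, book)
```

In the actual method each node is represented by a sequence of such quantized vectors, and the resulting discrete sets are sorted before being fed to the autoregressive Transformer.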
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: 1808
Changes Since Last Submission: Re-compiled the document to meet the margin requirements.
Code: https://github.com/yoboget/dgae
Supplementary Material: pdf
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 1824