HAGGLE: Get a better deal using a Hierarchical Autoencoder for Graph Generation and Latent-space Expressivity
Track: Full Paper (8 pages)
Keywords: graph generation, graph neural networks, graph embeddings, structural preservation
TL;DR: Proposal of a new hierarchical approach to graph generation that addresses the structural-coherence deficiencies of current latent-space generative methods.
Abstract: Generating realistic and diverse graph structures is a challenge with broad applications across scientific and engineering disciplines. A common approach is to learn a compressed latent space in which a graph is represented by a collection of node-level embeddings, often via methods such as a Graph Autoencoder (GAE). A fundamental challenge arises when we try to generate new graphs by sampling from this space. While many deep learning methods, such as diffusion models, Variational Autoencoders (VAEs), and Generative Adversarial Networks (GANs), can successfully generate new points in the latent space, they fail to capture the inherent relational dependencies between the node embeddings. This leads to decoded graphs that lack structural coherence and fail to replicate essential real-world properties. Alternatively, generating a single graph-level embedding and then decoding it into new node embeddings is also fundamentally limited, as the pooling operations needed to create the graph-level embedding are inherently lossy and discard crucial local structural information. We present a three-stage hierarchical framework called Hierarchical Autoencoder for Graph Generation and Latent-space Expressivity (HAGGLE) that addresses these limitations by systematically bridging node-level representations with graph-level generation. The framework trains a Graph Autoencoder for node embeddings, employs a Pooling Autoencoder for graph-level compression, and uses a size-conditioned GAN to generate new graphs. This approach produces structurally coherent graphs while providing useful graph-level embeddings for downstream tasks.
Submission Number: 14
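The three-stage pipeline outlined in the abstract (node-level GAE, pooling autoencoder, size-conditioned generator) can be sketched at the interface level. This is a minimal illustrative sketch only: the function names, embedding dimensions, and the random linear maps standing in for trained networks are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the three-stage HAGGLE-style pipeline.
# Random linear maps stand in for trained networks throughout;
# all names and dimensions are assumptions, not the paper's code.

rng = np.random.default_rng(0)
D_NODE, D_GRAPH = 16, 8  # node- and graph-level embedding sizes (assumed)

def node_encode(X):
    """Stage 1 (Graph Autoencoder) stand-in: node features -> node embeddings."""
    W = rng.standard_normal((X.shape[1], D_NODE)) * 0.1
    return X @ W

def pool_encode(Z):
    """Stage 2: compress the set of node embeddings into one graph vector.
    Plain mean pooling is lossy, which is why the paper trains a dedicated
    Pooling Autoencoder for this step."""
    W = rng.standard_normal((Z.shape[1], D_GRAPH)) * 0.1
    return Z.mean(axis=0) @ W

def pool_decode(g, n_nodes):
    """Stage 2 decoder stand-in: graph vector -> n_nodes node embeddings."""
    W = rng.standard_normal((D_GRAPH, D_NODE)) * 0.1
    return np.tile(g @ W, (n_nodes, 1))

def generate_graph_embedding(noise, n_nodes):
    """Stage 3 stand-in: a size-conditioned generator; the target node
    count enters as an extra conditioning input."""
    cond = np.concatenate([noise, [n_nodes / 100.0]])
    W = rng.standard_normal((cond.shape[0], D_GRAPH)) * 0.1
    return cond @ W

def decode_adjacency(Z):
    """Inner-product GAE decoder: edge probabilities from node embeddings."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Generation path: noise + target size -> graph embedding ->
# node embeddings -> adjacency probabilities.
n = 5
g = generate_graph_embedding(rng.standard_normal(4), n)
Z = pool_decode(g, n)
A = decode_adjacency(Z)
print(A.shape)  # (5, 5)
```

The key design point the sketch illustrates is the hierarchy: generation happens in the compact graph-level space, while decoding back through the pooling autoencoder and the GAE decoder restores node-level structure that direct graph-level sampling would otherwise lose.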