Abstract: Inspired by recent progress in text-conditioned image generation, we propose a model for the problem of text-conditioned graph generation. We introduce the Vector Quantized Text to Graph generator (VQ-T2G), a discrete graph variational autoencoder coupled with an autoregressive transformer for generating general graphs conditioned on text. We curate two multimodal datasets of graph-text pairs: a real-world dataset of subgraphs from the Wikipedia link network and a dataset of diverse synthetic graphs. Experimental results on these datasets demonstrate that VQ-T2G synthesises novel graphs whose structure aligns with the text conditioning. Additional experiments in the unconditioned graph generation setting show that VQ-T2G is competitive with existing unconditioned graph generation methods across a range of metrics.