Mixture of Variational Graph Autoencoders

Published: 2024 · Last Modified: 11 Nov 2025 · S+SSPR 2024 · License: CC BY-SA 4.0
Abstract: Autoencoders are unsupervised models that learn effective latent representations of data without supervision: the only requirement is that the decoder can reconstruct the original data point from the latent representation produced by the encoder. For structured data, graph autoencoders are one of the few effective ways to obtain such latent representations of graphs. However, on large, diverse graph datasets, a single autoencoder struggles to adapt to the varying structures, leading to suboptimal encodings. In this paper, we introduce a novel approach called Mixture of Variational Graph Autoencoders, which addresses this limitation by introducing a mixture of encoder/decoder models that provides multiple local, class-specific models, each better adapted to a different patch of the data space. An exhaustive experimental evaluation shows that our approach greatly outperforms the state of the art in reconstruction precision (Code: https://github.com/gdl-unive/MVGAE).
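The core idea — several encoder/decoder experts combined by a gate so that each expert specializes on a region of the data space — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (see the linked repository for that): the GCN-style encoder, the mean-pooled softmax gate, and all class and parameter names here are illustrative assumptions, and training (variational sampling, KL term, gradient updates) is omitted.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in GCN-style encoders.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

class VGAEExpert:
    """One encoder/decoder pair (hypothetical): a single-layer GCN-style
    encoder producing mean node embeddings, and an inner-product decoder
    mapping embeddings back to edge probabilities."""
    def __init__(self, in_dim, latent_dim, rng):
        self.W = rng.standard_normal((in_dim, latent_dim)) * 0.1

    def encode(self, A_norm, X):
        return A_norm @ X @ self.W              # node embeddings (Gaussian means)

    def decode(self, Z):
        return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))  # sigmoid of inner products

class MixtureOfVGAEs:
    """K experts plus a softmax gate over pooled node features; a graph is
    reconstructed as the gate-weighted combination of the experts' outputs,
    so each expert can specialize on one patch of the data space."""
    def __init__(self, in_dim, latent_dim, k, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = [VGAEExpert(in_dim, latent_dim, rng) for _ in range(k)]
        self.Wg = rng.standard_normal((in_dim, k)) * 0.1

    def gate(self, X):
        logits = X.mean(axis=0) @ self.Wg       # pool node features, then gate
        e = np.exp(logits - logits.max())
        return e / e.sum()                      # mixture weights, sum to 1

    def reconstruct(self, A, X):
        A_norm = normalize_adj(A)
        w = self.gate(X)
        A_rec = sum(wk * ex.decode(ex.encode(A_norm, X))
                    for wk, ex in zip(w, self.experts))
        return A_rec, w

# Tiny demo on a 3-node path graph with one-hot node features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.eye(3)
model = MixtureOfVGAEs(in_dim=3, latent_dim=2, k=4)
A_rec, w = model.reconstruct(A, X)
```

The gate is what distinguishes this from training K independent autoencoders: because reconstruction quality feeds back through the mixture weights, experts are pushed toward different, locally coherent subsets of graphs rather than all averaging over the whole dataset.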