Attention Based Variational Graph Auto-Encoder (AVGAE)

01 Mar 2023 (modified: 11 Apr 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: autoencoder, self-attention, graphs, generative modeling.
TL;DR: Introducing attention into the Variational Graph Auto-Encoder to strengthen the representational power of the inference model.
Abstract: Variational Graph Auto-Encoders (VGAEs) have recently become popular for unsupervised tasks and generative modeling. Unlike conventional autoencoders, which typically use fully-connected layers to learn a latent representation of the input data, VGAEs operate on graph-structured data. We propose to incorporate attention into the VGAE (AVGAE) to better capture relationships between nodes, thereby increasing robustness and generalizability. In a VAE, the encoder network learns to map input data to a lower-dimensional latent space, while the decoder network learns to map latent vectors back to the original input space. Unlike traditional autoencoders, which typically use a fixed, deterministic encoding function, VAEs use a probabilistic encoder that maps input data to a distribution over the latent space. This probabilistic encoding has been shown to improve the quality of the generated output, particularly when the input data is complex and high-dimensional.
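
To make the idea concrete, here is a minimal sketch of what an attention-based VGAE encoder could look like, assuming the attention mechanism is realized with GAT-style layers (PyTorch Geometric's GATConv) and the standard VGAE inner-product decoder. The layer sizes, two-layer depth, and choice of framework are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv  # assumed attention layer; the paper's exact attention may differ

class AVGAEEncoder(torch.nn.Module):
    """Hypothetical attention-based encoder: shared attention layer,
    then separate attention heads producing the mean and log-variance
    of the approximate posterior q(z | X, A)."""
    def __init__(self, in_dim, hidden_dim, latent_dim, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads, concat=True)
        self.conv_mu = GATConv(hidden_dim * heads, latent_dim, heads=1, concat=False)
        self.conv_logvar = GATConv(hidden_dim * heads, latent_dim, heads=1, concat=False)

    def forward(self, x, edge_index):
        h = F.elu(self.conv1(x, edge_index))
        return self.conv_mu(h, edge_index), self.conv_logvar(h, edge_index)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def decode(z):
    # Inner-product decoder: predicted edge probabilities sigmoid(Z Z^T).
    return torch.sigmoid(z @ z.t())
```

Training would then combine the edge-reconstruction loss from the decoder with a KL-divergence term against a standard-normal prior, as in the standard VGAE objective.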