Revisiting Embeddings for Graph Neural Networks

Published: 24 Nov 2022, Last Modified: 05 May 2023
Venue: LoG 2022 Poster
Keywords: Graph Attention, Embeddings, Pretrained Models, Transfer Learning
TL;DR: We question the quality of current graph neural network embeddings and ask whether available graph datasets are suitable for testing GNNs.
Abstract: Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them affects the accuracy of GNNs. We explore different embedding extraction techniques for both images and text, and find that the performance of different GNN architectures depends on the embedding style used. We observe a prevalence of bag-of-words (BoW) embeddings and text classification tasks in available graph datasets. Given the impact embeddings have on GNN performance, this leads to GNNs being optimised for BoW vectors rather than for general graph representation learning.
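To make the contrast in the abstract concrete, here is a minimal sketch (our own illustration, not the paper's code) of the two embedding styles it compares for text-attributed graph nodes: bag-of-words count vectors versus pretrained encoder embeddings. The `bow_features` helper and the example texts are hypothetical; the Sentence-BERT call shown in the comment is the kind of pretrained alternative the paper discusses.

```python
def bow_features(texts):
    """Bag-of-words node features: one count vector per node text."""
    # Build a sorted vocabulary over all tokens in the corpus.
    vocab = sorted({tok for t in texts for tok in t.lower().split()})
    index = {tok: i for i, tok in enumerate(vocab)}
    feats = []
    for t in texts:
        vec = [0] * len(vocab)
        for tok in t.lower().split():
            vec[index[tok]] += 1  # raw term count per vocabulary slot
        feats.append(vec)
    return feats, vocab

# In the pretrained setting, the same texts would instead be encoded with a
# model such as Sentence-BERT (hypothetical usage, not executed here):
#   from sentence_transformers import SentenceTransformer
#   feats = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

texts = ["graph neural networks", "neural embeddings for graphs"]
feats, vocab = bow_features(texts)
print(len(vocab), feats[0])
```

Either feature matrix would then be fed to a GNN as the node feature input; the paper's observation is that benchmark accuracy is sensitive to which of the two is chosen.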
Type Of Submission: Extended abstract (max 4 main pages).
