United We Stand, Divided We Fall: Networks to Graph (N2G) Abstraction for Robust Graph Classification under Graph Label Corruption

Published: 18 Nov 2023, Last Modified: 30 Nov 2023, LoG 2023 Poster
Keywords: Representation Learning, Classification
TL;DR: We propose a new graph-of-graphs abstraction that enhances learning on collections of graphs with perturbed labels.
Abstract: Nowadays, graph neural networks (GNNs) are the primary machinery for tackling (semi-)supervised graph classification tasks. The aim here is to predict classes for unlabeled graphs, given a collection of graphs with known labels. However, in many real-world applications the available information on graph classes may be distorted, either due to an incorrect labelling process (e.g., as in biochemistry and bioinformatics) or due to targeted attacks (e.g., as in network-based customer attrition analytics). Over the past few years, an increasing number of studies have indicated that GNNs are vulnerable to both noisy node labels and noisy graph labels, and while this problem has received noticeable attention for node classification tasks, the vulnerability of GNNs for graph classification with perturbed graph labels still remains in its nascency. We hypothesise that this challenge can be addressed by the universal principle {\it United We Stand, Divided We Fall}. In particular, most GNNs view each graph as a standalone entity and, as a result, are limited in their ability to account for complex interdependencies among the graphs. Inspired by recent studies on molecular graph learning, we propose a new robust knowledge representation called {\it Networks to Graph} (N2G). The key N2G idea is to construct a new abstraction in which each graph in the collection is represented by a node, while an edge reflects some notion of similarity between the graphs. As a result, the graph classification task can then be naturally reformulated as a node classification problem. We show that the proposed N2G representation approach not only improves classification performance in both binary and multi-class scenarios but also substantially enhances robustness against noisy labels in the training data, leading to relative robustness gains of up to 11.7\% on social network benchmarks and up to 25.8\% on bioinformatics graph benchmarks under a 10\% graph label corruption rate.
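The abstract's core construction can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the paper's actual pipeline: the abstract does not specify which graph similarity is used, so a degree-histogram feature vector with cosine similarity stands in for it. Each input graph becomes a node of the meta-graph, and an edge is added between two graphs whose similarity exceeds a threshold; node classification would then run on the returned meta-edges.

```python
import math
from itertools import combinations

# Hypothetical sketch of the N2G abstraction: summarize each graph by a
# normalized degree histogram (a stand-in for the paper's unspecified
# similarity measure), then link sufficiently similar graphs in a meta-graph.

def degree_histogram(edges, num_nodes, max_deg=10):
    """Normalized degree histogram of an undirected graph."""
    deg = [0] * num_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0.0] * (max_deg + 1)
    for d in deg:
        hist[min(d, max_deg)] += 1
    total = max(num_nodes, 1)
    return [h / total for h in hist]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_n2g(graphs, threshold=0.9):
    """graphs: list of (edge_list, num_nodes) pairs.
    Returns the edge list of the meta-graph, where meta-node i
    corresponds to the i-th input graph."""
    feats = [degree_histogram(e, n) for e, n in graphs]
    return [
        (i, j)
        for i, j in combinations(range(len(graphs)), 2)
        if cosine(feats[i], feats[j]) >= threshold
    ]
```

With this meta-graph in hand, the original graph labels (including the corrupted ones) become node labels, and any standard node classifier can be applied; the intuition from the abstract is that label noise on one meta-node can be offset by the labels of its similar neighbours.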
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 32