Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Track: long paper (4–8 pages excluding references)
Keywords: sets, graph neural networks, small molecules, drug-drug interaction, methodology
TL;DR: We introduce Graph Set Transformer (GST), which achieves performance improvements over cascaded baselines (GNN + DeepSets or Set Transformer) by simultaneously performing node-level feature propagation and set-level contextual modelling.
Abstract: We introduce the Graph Set Transformer (GST), a neural network architecture for learning on sets of graphs, motivated by the assumption that structurally heterogeneous graphs may share higher-level semantics. Existing architectures, including DeepSets and Set Transformer, require pre-encoded graph embeddings from a separate GNN, creating a bottleneck between feature extraction and set-level contextualisation. In contrast, GST avoids this bottleneck by performing node-level feature propagation and cross-graph contextual modelling simultaneously, fusing the two levels of information through a gating mechanism. On sets of cardinality 10 and 20 across five molecular classification benchmarks, GST achieves consistent ROC-AUC scores of 98.5-99.6%, compared with 89.3-98.8% for the best baselines. On a drug-drug interaction benchmark with sparse, undersampled data, GST improves the F1 score by 17.5% over the best baseline.
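The gating mechanism mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: it shows one common way to fuse a node-level feature vector with a set-level context vector through a learned sigmoid gate (the function name `gated_fusion` and parameters `W`, `b` are hypothetical).

```python
import numpy as np

def gated_fusion(h_node, h_set, W, b):
    """Illustrative gated fusion of node-level features (h_node) with
    set-level context (h_set); W and b parameterise the gate.
    This is a sketch of a generic gating mechanism, not GST itself."""
    # Sigmoid gate computed from the concatenation of both feature levels.
    z = np.concatenate([h_node, h_set], axis=-1) @ W + b
    g = 1.0 / (1.0 + np.exp(-z))
    # Elementwise convex combination of the two information levels.
    return g * h_node + (1.0 - g) * h_set
```

Because the gate lies in (0, 1), the output is an elementwise convex combination of the two inputs, letting the model interpolate between local structural and set-contextual information per feature dimension.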
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 94