Is Graph Mixup Beneficial? Investigating Interpolation And Empirical Performance of Graph Mixup Methods

ICLR 2026 Conference Submission16401 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Graph Neural Networks, Data Augmentation, Mixup, Graph Classification, Graph Edit Distance, Representation Learning, Evaluation
TL;DR: We investigate graph mixup and analyze prediction performance as well as interpolation properties of prior methods empirically.
Abstract: Mixup is a widely used data augmentation technique that constructs new training examples by interpolating between existing ones. While effective in domains like vision and language, applying mixup to graph data is challenging. In this paper, we analyze and empirically explore state-of-the-art graph mixup methods. We conducted an independent evaluation following established evaluation protocols for graph classification and found that none of the mixup methods yielded statistically significant improvements over the no-mixup baseline. To obtain further insights, we analyzed the graphs generated by existing mixup methods from an interpolation perspective using the graph edit distance. We found that (i) many mixup methods failed to interpolate well, (ii) mixup methods that interpolated well often outperformed those that did not, and (iii) even optimal interpolation did not lead to performance improvements. Our findings highlight the need for a more rigorous exploration and evaluation of mixup for graphs.
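The abstract's interpolation criterion can be made concrete with the graph edit distance (GED): a mixup graph M lies "between" graphs A and B when the triangle inequality d(A,M) + d(M,B) ≥ d(A,B) holds with equality. The sketch below is a hypothetical illustration of that check using NetworkX's exact GED (feasible only for small graphs); `interpolation_gap` is an assumed helper name, not the paper's actual protocol.

```python
import networkx as nx

def interpolation_gap(g_a, g_b, g_mix):
    """Excess of d(A,M) + d(M,B) over d(A,B) in graph-edit-distance space.

    The gap is always >= 0 by the triangle inequality; a gap of 0 means
    g_mix lies exactly on a GED geodesic between g_a and g_b, i.e. it is
    an optimal interpolant under this metric.
    """
    d_ab = nx.graph_edit_distance(g_a, g_b)
    d_am = nx.graph_edit_distance(g_a, g_mix)
    d_mb = nx.graph_edit_distance(g_mix, g_b)
    return (d_am + d_mb) - d_ab

# Toy example: interpolating between a 4-node path and a 4-node cycle.
a = nx.path_graph(4)
b = nx.cycle_graph(4)
mix = nx.path_graph(4)
mix.add_edge(0, 3)  # closing the path yields the cycle, an exact interpolant
print(interpolation_gap(a, b, mix))  # 0.0: mix sits on the A-B geodesic
```

A mixup method whose outputs consistently show a large gap is, under this view, generating graphs off the geodesic rather than interpolating; exact GED is NP-hard in general, so larger benchmarks would need an approximate or bounded variant.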
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16401