Is Graph Mixup Beneficial? Investigating Interpolation And Empirical Performance of Graph Mixup Methods

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Graph Neural Networks, Data Augmentation, Mixup, Graph Classification, Graph Edit Distance, Representation Learning, Evaluation
TL;DR: We investigate graph mixup and analyze prediction performance as well as interpolation properties of prior methods empirically.
Abstract: Mixup is a widely used data augmentation technique that constructs new training examples by interpolating between existing ones. While simple and effective in domains like vision and language, applying mixup to graph data is non-trivial, and independent empirical evidence for its effectiveness is lacking. To fill this gap, we conducted an independent evaluation following established evaluation protocols for graph classification and found that none of the state-of-the-art mixup methods yielded statistically significant improvements over the no-mixup baseline. To obtain further insights, we analyzed the graphs generated by existing mixup methods from an interpolation perspective using the graph edit distance. We found that (i) many mixup methods failed to interpolate well, (ii) high interpolation error led to performance degradation, and (iii) even optimal interpolation did not lead to performance improvements. Our findings highlight the need for a more rigorous exploration and evaluation of mixup for graphs.
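For context, the interpolation the abstract refers to is, in its original vector form, a convex combination of two examples and their labels. The sketch below is a minimal, hedged illustration of classic mixup on vector data (not any of the graph mixup methods the paper evaluates, whose details are not given here); the function name and defaults are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Classic mixup (vector form): convex combination of two examples
    and their one-hot labels, with mixing weight lam ~ Beta(alpha, alpha).
    Graph mixup methods must define an analogous interpolation on graphs,
    which is what makes the problem non-trivial."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # interpolated features
    y = lam * y1 + (1 - lam) * y2         # interpolated (soft) label
    return x, y, lam

# Example: mix two feature vectors with one-hot labels.
x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x_b, y_b = np.array([0.0, 1.0]), np.array([0.0, 1.0])
x_mix, y_mix, lam = mixup(x_a, y_a, x_b, y_b, rng=np.random.default_rng(0))
```

The paper's interpolation analysis asks whether graph mixup methods produce graphs that lie "between" the inputs in an analogous sense, measured via the graph edit distance rather than Euclidean interpolation.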
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16401