Abstract: Data sets used in recent work on graph similarity scoring and matching tasks suffer from significant limitations. Using Graph Edit Distance (GED) as a showcase, we highlight pervasive issues such as train-test leakage and poor generalization, which have misled the community's assessment of what a given method or model can actually do.
These limitations arise, in part, because preparing labeled data is computationally expensive for combinatorial graph problems.
We establish key properties of GED that enable scalable data augmentation for training and adversarial test-set generation.
Together, our analysis, experiments, and insights establish new, sound guidelines for designing and evaluating future neural networks for graph similarity, and suggest open challenges for further research.
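The abstract does not spell out which GED properties drive the augmentation, but one standard property that supports it is the edit-path upper bound: applying k atomic edit operations to a graph G yields a graph G' with GED(G, G') <= k under a unit-cost edit model, giving a cheap (if loose) training label. The sketch below illustrates this idea with networkx; the function name `perturb` and all parameter choices are ours, not the paper's.

```python
import random
import networkx as nx

def perturb(g: nx.Graph, k: int, rng: random.Random) -> nx.Graph:
    """Apply k atomic edit operations to g, so that GED(g, g') <= k
    under a unit-cost edit model.

    Operations are unit-cost edits: insert an edge, delete an edge,
    insert a node, or delete an *isolated* node (deleting a connected
    node would implicitly delete its edges and cost more than one unit).
    Assumes integer node labels, as in standard networkx generators.
    """
    h = g.copy()
    next_id = max(h.nodes, default=-1) + 1
    for _ in range(k):
        non_edges = list(nx.non_edges(h))
        isolated = [n for n in h.nodes if h.degree(n) == 0]
        ops = ["add_node"]  # always applicable
        if non_edges:
            ops.append("add_edge")
        if h.number_of_edges():
            ops.append("del_edge")
        if isolated:
            ops.append("del_node")
        op = rng.choice(ops)
        if op == "add_node":
            h.add_node(next_id)
            next_id += 1
        elif op == "add_edge":
            h.add_edge(*rng.choice(non_edges))
        elif op == "del_edge":
            h.remove_edge(*rng.choice(list(h.edges)))
        else:  # del_node: only isolated nodes, so cost is exactly one unit
            h.remove_node(rng.choice(isolated))
    return h

# Usage: build (graph, perturbed graph, upper-bound label) training triples.
rng = random.Random(0)
g = nx.erdos_renyi_graph(8, 0.3, seed=0)
triples = [(g, perturb(g, k, rng), k) for k in range(1, 5)]
```

Note that k is an upper bound only: independent edits can partially cancel, so labels produced this way are noisy by design.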
Lay Summary: We look at how current machine learning systems compare graphs, which is important in tasks like finding similar molecules or detecting fraud. We find that many widely used datasets contain hidden overlaps between training and test data, sometimes nearing 100%, making it unclear whether models are truly learning meaningful patterns or just memorizing. We propose new ways to fix this and ensure fairer, more meaningful evaluation.
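The submission's own leakage audit is not described in the abstract; as a rough illustration of how such overlap can be measured at scale, the sketch below hashes every graph with networkx's Weisfeiler-Lehman graph hash and reports the fraction of test graphs whose hash already occurs in the training split. `split_overlap` and its parameters are illustrative assumptions, and WL hashing is only a proxy: isomorphic graphs always collide, but rare non-isomorphic collisions can slightly overstate the overlap.

```python
import networkx as nx

def split_overlap(train_graphs, test_graphs, iterations: int = 3) -> float:
    """Fraction of test graphs whose WL hash also appears in the train split.

    Isomorphic graphs receive identical hashes, so a hash hit is a strong
    signal of a duplicate; non-isomorphic collisions are possible but rare.
    """
    train_hashes = {
        nx.weisfeiler_lehman_graph_hash(g, iterations=iterations)
        for g in train_graphs
    }
    hits = sum(
        nx.weisfeiler_lehman_graph_hash(g, iterations=iterations) in train_hashes
        for g in test_graphs
    )
    return hits / max(len(test_graphs), 1)
```

For GED tasks, where each sample is a pair (G1, G2), the same check can be lifted to the pair level by comparing the sorted tuple of the two graphs' hashes across splits.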
Link To Code: https://anonymous.4open.science/r/better-graph-matching-7146/README.md
Primary Area: Data Set Creation, Curation, and Documentation
Keywords: Improved benchmarking of GED tasks
Submission Number: 457