Keywords: Model Extraction Attack, Model Extraction Defense, Graph Learning, IP Protection
Abstract: Graph machine learning models support applications in recommendation, finance and biomedicine, yet their parameters and training data are proprietary assets that face threats such as model extraction, inversion and membership inference. Prior work proposes many attacks and defenses, but comparisons remain unreliable because experiments use inconsistent datasets, threat models and evaluation metrics. We introduce \emph{GraphIP-Bench}, a unified benchmark and library that provides standardized datasets, reference implementations of diverse attack and defense families, and a common evaluation protocol. The benchmark specifies clear metrics for extraction fidelity, task utility and computational cost, which together measure the protection–utility trade-off under a prescribed query-access threat model. Empirical analysis across citation, coauthor and commercial graphs shows that several watermarking methods preserve utility and enable ownership verification with different cost profiles, while data-free extraction lags behind data-driven extraction even at large query budgets. To the best of our knowledge, this is the first benchmark to standardize rigorous evaluation of model-extraction attacks and defenses for graph neural networks. Our implementation is publicly available at: \href{https://anonymous.4open.science/r/GraphIPBench-7F7F}{https://anonymous.4open.science/r/GraphIPBench-7F7F}.
Primary Area: datasets and benchmarks
Submission Number: 8395