On Making Graph Continual Learning Easy, Fool-Proof, and Extensive: a Benchmark Framework and Scenarios

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Graph Continual Learning, Continual Learning Benchmark Framework
Abstract: Continual Learning (CL) is the process of continually learning a sequence of tasks. Most existing CL methods deal with independent data (e.g., images and text), for which many benchmark frameworks and results under standard experimental settings are available. CL methods for graph data, however, are surprisingly underexplored because of (a) the lack of standard experimental settings, especially regarding how to deal with the dependency between instances, (b) the lack of benchmark datasets and scenarios, and (c) the high complexity of implementation and evaluation due to that dependency. In this paper, regarding (a), we define four standard incremental settings (task-, class-, domain-, and time-incremental) for graph data, which apply naturally to many node-, edge-, and graph-level problems. Regarding (b), we provide 17 benchmark scenarios based on nine real-world graphs. Regarding (c), we develop BEGIN, an easy and fool-proof framework for graph CL. BEGIN is easily extended since it is modularized with reusable modules for data processing, algorithm design, validation, and evaluation. Notably, the evaluation module is completely separated from user code to eliminate potential mistakes in evaluation. Using all of the above, we report extensive benchmark results of seven graph CL methods. Compared to the latest benchmark for graph CL, BEGIN covers 2.75× more combinations of incremental settings and problem levels, and it allows the same graph CL method to be implemented with about 30% fewer lines of code.
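The abstract's "fool-proof" design, in which the evaluation module is owned by the framework rather than the user, can be sketched as follows. This is a minimal illustration of the general pattern, not BEGIN's actual API; all class and method names here are hypothetical.

```python
# Hypothetical sketch of a modular CL loop where evaluation is
# separated from user code, as the abstract describes.
# Names (Scenario, Evaluator, Trainer) are illustrative, not BEGIN's API.

class Scenario:
    """A sequence of tasks, each a (train_data, test_data) pair."""
    def __init__(self, tasks):
        self.tasks = tasks

class Evaluator:
    """Created and invoked only by the framework, never by user code,
    so users cannot accidentally mis-measure performance."""
    def __init__(self):
        self.results = []
    def evaluate(self, model, test_data):
        acc = sum(model(x) == y for x, y in test_data) / len(test_data)
        self.results.append(acc)
        return acc

class Trainer:
    """Users subclass this and override only the training hook."""
    def train_task(self, model, train_data):
        raise NotImplementedError
    def run(self, model, scenario):
        evaluator = Evaluator()  # owned internally: fool-proof evaluation
        for train_data, test_data in scenario.tasks:
            self.train_task(model, train_data)
            evaluator.evaluate(model, test_data)
        return evaluator.results

# Toy usage: a lookup-table "model" trained over two tasks.
class LookupModel:
    def __init__(self):
        self.table = {}
    def __call__(self, x):
        return self.table.get(x)

class MemorizeTrainer(Trainer):
    def train_task(self, model, train_data):
        model.table.update(dict(train_data))

scenario = Scenario([
    ([(1, "a"), (2, "b")], [(1, "a")]),          # task 0
    ([(3, "c")], [(2, "b"), (3, "c")]),          # task 1
])
results = MemorizeTrainer().run(LookupModel(), scenario)
# results[k] is the test accuracy measured right after task k
```

Because `Trainer.run` constructs the `Evaluator` itself, user code only supplies `train_task`; it cannot skip, reorder, or alter the measurement step.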
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Infrastructure (eg, datasets, competitions, implementations, libraries)
TL;DR: We present BEGIN, an easy-to-use, fool-proof, and extensive benchmark framework for graph continual learning