Abstract: Graph meta-learning models can rapidly adapt to new tasks with extremely limited labeled data by learning transferable meta-knowledge and inductive biases on graphs. Existing methods construct meta-training tasks from abundant labeled nodes of base classes, which limits the application scenarios of graph meta-learning. Therefore, we propose an unsupervised graph meta-learning framework via local subgraph augmentation (UMLGA). Specifically, we first propose a graph clustering-based sampling method to sample anchor nodes from different natural classes and extract their corresponding local subgraphs. Then, assuming that the generated augmentation samples share the same label as their anchors, we design structure-wise and feature-wise graph augmentation strategies to generate diverse augmented subgraphs while keeping their semantics unchanged. Finally, we perform meta-training on the unsupervisedly constructed tasks with a weighted meta-loss, which extracts cross-task knowledge for fast adaptation to novel classes. To evaluate the effectiveness of UMLGA, a series of experiments is conducted on four real-world graph datasets. Experimental results show that, even without relying on extensive labeled data, UMLGA achieves comparable and even better few-shot node classification performance compared with supervised graph meta-learning backbone models. With GPN as the backbone model, the improvements of UMLGA are respectively 3.0\(\sim \)9.3%, 4.4\(\sim \)11.6%, -1.2\(\sim \)9.3%, and 1.8\(\sim \)15.1% on the Amazon-Clothing, Amazon-Electronics, DBLP, and ogbn-products datasets.
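The abstract mentions structure-wise and feature-wise augmentation of local subgraphs without detailing the operators. A minimal sketch of two common label-preserving variants, assuming random edge dropping for the structure side and random feature-dimension masking for the feature side (function names, rates, and the NumPy adjacency-matrix representation are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def structure_augment(adj, drop_rate=0.2, rng=None):
    """Structure-wise augmentation (hypothetical variant): randomly drop
    a fraction of edges from a local subgraph's adjacency matrix while
    keeping the node set, so the anchor's label is assumed unchanged."""
    rng = rng or np.random.default_rng()
    adj = adj.copy()
    # Existing edges in the upper triangle (undirected graph assumed).
    rows, cols = np.triu(adj, k=1).nonzero()
    drop = rng.random(len(rows)) < drop_rate
    adj[rows[drop], cols[drop]] = 0
    adj[cols[drop], rows[drop]] = 0  # keep symmetry
    return adj

def feature_augment(x, mask_rate=0.2, rng=None):
    """Feature-wise augmentation (hypothetical variant): zero out a
    random subset of feature dimensions for all subgraph nodes."""
    rng = rng or np.random.default_rng()
    keep = rng.random(x.shape[1]) >= mask_rate
    return x * keep  # broadcast the column mask over every node
```

Applying both operators to one anchor subgraph yields several views treated as sharing the anchor's pseudo-label, which is how such unsupervised tasks are typically assembled for meta-training.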
External IDs: dblp:journals/apin/HuangZZXL26