Multi-Difficulty Measure Curriculum Learning for Heterogeneous Graphs with Noise

TMLR Paper 4146 Authors

05 Feb 2025 (modified: 11 Feb 2025) · Withdrawn by Authors · CC BY 4.0
Abstract: Heterogeneous graphs have gained significant traction for modeling and analyzing complex systems across diverse domains because of their ability to represent multiple types of entities and relationships. However, these graphs face considerable challenges from different types of noise, including node feature noise, edge noise, and label noise, which arise from data collection imperfections, inconsistent labeling processes, and graph construction errors. Such noise significantly undermines the performance of Graph Neural Networks (GNNs), which rely on high-quality data to learn meaningful patterns. In this paper, we address these challenges by investigating the integration of Curriculum Learning (CL) to enhance the robustness of GNNs against multiple forms of noise in heterogeneous graphs. We propose a novel approach, Multi-Difficulty Measure Curriculum Learning (MDCL), which adaptively incorporates diverse difficulty measures to capture various aspects of heterogeneous graphs, including node features, topological structures, and training dynamics. MDCL uses an adaptive weighting mechanism to dynamically balance these difficulty measures, optimizing the learning process in the presence of complex noise. Empirical evaluations across benchmark datasets and GNN architectures demonstrate that MDCL consistently improves the accuracy and robustness of GNNs under diverse noise types, establishing it as a promising solution for real-world applications involving heterogeneous graphs.
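The abstract only outlines MDCL at a high level, so the sketch below is a minimal, hypothetical illustration of the general recipe it describes (normalize several per-node difficulty measures, mix them with adaptive weights, and grow the training set via a pacing schedule), not the authors' implementation. All function names, the three stand-in difficulty scores, and the linear pacing schedule are assumptions made for illustration.

```python
import torch

def normalize(scores: torch.Tensor) -> torch.Tensor:
    # Rescale one difficulty measure to [0, 1] so measures are comparable.
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def adaptive_weights(logits: torch.Tensor) -> torch.Tensor:
    # Keep the mixing weights on the simplex via softmax; the logits would
    # be optimized jointly with the GNN parameters (one way to realize an
    # "adaptive weighting mechanism").
    return torch.softmax(logits, dim=0)

def combine_difficulties(measures: list[torch.Tensor],
                         weights: torch.Tensor) -> torch.Tensor:
    # Weighted sum of K normalized per-node difficulty measures -> (N,).
    stacked = torch.stack([normalize(m) for m in measures])  # (K, N)
    return weights @ stacked

def curriculum_mask(difficulty: torch.Tensor,
                    epoch: int, total_epochs: int) -> torch.Tensor:
    # Linear pacing function: start with the easiest 20% of nodes and
    # expand until the whole training set is included.
    frac = min(1.0, 0.2 + 0.8 * epoch / total_epochs)
    k = max(1, int(frac * difficulty.numel()))
    threshold = difficulty.kthvalue(k).values
    return difficulty <= threshold

# Toy usage with random stand-ins for the three measure families
# named in the abstract (features, topology, training dynamics).
n_nodes = 1000
feat_diff = torch.rand(n_nodes)  # e.g. distance to neighborhood feature mean
topo_diff = torch.rand(n_nodes)  # e.g. a structural/degree-based score
dyn_diff = torch.rand(n_nodes)   # e.g. per-node training loss

w_logits = torch.zeros(3, requires_grad=True)  # trained alongside the GNN
w = adaptive_weights(w_logits)
difficulty = combine_difficulties([feat_diff, topo_diff, dyn_diff], w)
mask = curriculum_mask(difficulty.detach(), epoch=3, total_epochs=50)
print(f"training on {int(mask.sum())} of {n_nodes} nodes this epoch")
```

The `mask` would then select the nodes whose losses are backpropagated in the current epoch; under noisy labels or features, deferring high-difficulty nodes in this way is the usual rationale for curriculum learning.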
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Chuxu_Zhang2
Submission Number: 4146