Keywords: Mining graphs, Graph Convolutional Networks, Model degradation
Abstract: Graph Convolutional Networks (GCNs) are the basic architecture for handling graph-structured data, and deeper GCNs are required for large and sparse graphs. However, as the number of layers increases, the performance of GCNs degrades; this degradation is commonly attributed to over-smoothing, although that attribution remains debated. In this paper, we argue that model degradation is not equivalent to over-smoothing or gradient vanishing, and we propose a systematic solution, an Adaptive DeepGCN (ADGCN) architecture, which gives the model the potential to address all of these issues. We place learnable parameters at appropriate locations so that the model can adaptively adjust to different graph-structured data. We conduct experiments on real-world datasets to verify the stability and adaptability of our architecture.
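To make the idea of "learnable parameters at appropriate locations" concrete, below is a minimal sketch, not the authors' implementation: a deep GCN in which each layer carries a learnable mixing coefficient that controls how much propagated signal is added to the layer input, one plausible way to let depth-wise smoothing adapt to the data. The class and parameter names (`AdaptiveGCNLayer`, `alpha`) are hypothetical, and a normalized adjacency matrix is assumed to be given.

```python
import torch
import torch.nn as nn


class AdaptiveGCNLayer(nn.Module):
    """One GCN layer with a learnable residual coefficient (illustrative only)."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Learnable mixing coefficient; initialised small so early training
        # stays close to an identity mapping.
        self.alpha = nn.Parameter(torch.tensor(0.1))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Standard GCN propagation: normalized adjacency times transformed features.
        propagated = adj_norm @ self.linear(x)
        # Adaptive residual: the model learns how much smoothing this layer applies,
        # which can help deep stacks avoid degrading as layers are added.
        return x + torch.sigmoid(self.alpha) * torch.relu(propagated)


class DeepAdaptiveGCN(nn.Module):
    """Stack of adaptive layers; depth is a constructor argument."""

    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(AdaptiveGCNLayer(dim) for _ in range(num_layers))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x, adj_norm)
        return x
```

In this sketch, each layer's coefficient is trained jointly with the weights, so different graphs (or different depths) can settle on different amounts of propagation; the actual ADGCN design may place its learnable parameters differently.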