Abstract: Node importance estimation assigns a global importance score to each node in a graph and is pivotal to downstream tasks such as recommendation and network dismantling. Prior research pre-trains classification tasks on node labels and structural information, then computes node importance scores as a downstream regression task. However, the inconsistency between the pre-training and downstream tasks creates a gap that tends to cause negative transfer. This paper proposes to narrow this gap for node importance estimation with a multi-view technique comprising a node view for context and a graph view for structure. Specifically, in the node view, we devise soft prompts by encoding node information, which enables the model to capture structural features within a semantic context; the downstream node regression task is then aligned with pre-training by inserting prompt patterns. In the graph view, we introduce virtual nodes, learnably inserted according to node importance, to create a prompt graph. High-importance nodes in the original graph are linked to more virtual nodes, which enhances their embeddings in subsequent propagation steps; this in turn increases their importance scores in downstream tasks and improves the model's ability to distinguish significant nodes. Additionally, the prompts from the two views are fused through multi-view contrastive learning to further enhance the expressiveness of the node embeddings. Empirical evaluation on four public datasets shows that our model significantly and consistently outperforms state-of-the-art alternatives.
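The graph-view idea above can be illustrated with a minimal sketch: virtual (prompt) nodes are attached to real nodes in proportion to their importance, so that high-importance nodes receive more virtual neighbors and are amplified during message passing. The function name, the proportional allocation rule, and the virtual-node labels below are illustrative assumptions, not the paper's actual implementation.

```python
def build_prompt_graph(edges, importance, num_virtual=10):
    """Attach virtual nodes to real nodes in proportion to importance.

    edges: list of (u, v) pairs in the original graph.
    importance: dict mapping node -> importance score.
    num_virtual: total budget of virtual nodes to distribute (assumed rule).
    """
    total = sum(importance.values())
    prompt_edges = list(edges)  # keep original edges
    links = {}                  # how many virtual nodes each real node gets
    vid = 0
    for node, score in sorted(importance.items()):
        # Proportional allocation: higher-importance nodes get more
        # virtual neighbors (rounded share of the virtual-node budget).
        k = round(num_virtual * score / total)
        links[node] = k
        for _ in range(k):
            prompt_edges.append((node, f"v{vid}"))  # link to a virtual node
            vid += 1
    return prompt_edges, links

# Toy example: a path graph with increasing node importance.
edges = [(0, 1), (1, 2), (2, 3)]
importance = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
prompt_edges, links = build_prompt_graph(edges, importance)
```

In this toy setting, node 3 (highest importance) is linked to the most virtual nodes, so a subsequent propagation step would aggregate more prompt signal into its embedding, consistent with the enhancement described in the abstract.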
External IDs: dblp:journals/tnse/MaFXZ26