Abstract: Structure-enhanced models achieve leading performance on the link prediction task by jointly exploiting selected structure features and Graph Neural Network (GNN) based node embeddings. However, we observe that as graphs become sparser, these methods perform worse than classical GNN-based methods, which severely limits their practical use. We prove that as a graph becomes sparser, the distance between structure features shrinks, and we deduce that this is the underlying reason the model fails to give reasonable predictions, leading to performance degradation. To overcome this problem, we first argue that models need to learn the importance of node embeddings based on the distance between structure features. However, there is little research on efficiently estimating this distance information, and existing models fail to assign the importance properly. We therefore design a method called DIP, which satisfies this relation requirement through a weighted term of node embeddings and uses node degree to estimate the distance information from which the weight is computed. Experimental results show that DIP significantly improves the accuracy of structure-enhanced link prediction models and effectively solves the performance degradation problem. The code of DIP is publicly available at https://github.com/lzwqbh/DIP.
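The weighting idea described in the abstract can be illustrated with a minimal sketch. The exact weighting function, the names `dip_weight` and `link_score`, and the use of a log-damped inverse of the minimum endpoint degree are all assumptions for illustration, not the paper's actual definition; the only grounded idea is that node degree serves as a cheap proxy for the distance between structure features, and that node-embedding terms should matter more when that distance is small (sparse graphs).

```python
import math

def dip_weight(deg_u: int, deg_v: int, alpha: float = 1.0) -> float:
    """Hypothetical weight on the node-embedding term for a candidate
    link (u, v). Lower degrees suggest a sparser neighborhood, where
    structure features of different links lie close together, so the
    node embeddings should be weighted more heavily."""
    return alpha / math.log1p(min(deg_u, deg_v) + 1.0)

def link_score(h_u, h_v, struct_score: float, deg_u: int, deg_v: int) -> float:
    """Combine a structure-feature score with a degree-weighted
    node-embedding interaction (here a plain dot product)."""
    emb_score = sum(a * b for a, b in zip(h_u, h_v))
    return struct_score + dip_weight(deg_u, deg_v) * emb_score
```

Under this sketch, a link between two degree-1 nodes puts more weight on the embedding term than a link between two degree-10 nodes, matching the intuition that structure features alone are less discriminative in the sparse case.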
External IDs: dblp:conf/ijcnn/LiDZFGM25