Track: Social networks and social media
Keywords: link prediction, multi-layer networks, shared-latent space, adversarial training, orthogonal fusion
Abstract: Link prediction in multi-layer networks is a longstanding problem: predicting missing links from the structures observed across all layers. Existing link prediction methods for multi-layer networks typically merge the multi-layer network into a single-layer network and/or perform explicit calculations with intra-layer and inter-layer similarity metrics. However, these approaches often overlook the role of coupling in multi-layer networks, specifically the shared information and latent relationships between layers, which limits prediction performance. This calls for methods that extract representations in a shared-latent space to enhance inter-layer information sharing and improve prediction. In this paper, we propose a novel end-to-end framework, Link prediction Utilizing Shared-laTent spacE Representation (LUSTER), for multi-layer networks. LUSTER consists of four key modules: the representation extractor, the latent space learner, the complementary enhancer, and the link predictor. The representation extractor learns the intra-layer representations of each layer, capturing the data characteristics within that layer. The latent space learner extracts representations from the shared-latent space across network layers through adversarial training. The complementary enhancer combines the intra-layer representations and the shared-latent space representations through orthogonal fusion, providing comprehensive information. Finally, the link predictor uses the enhanced representations to predict missing links. Extensive experiments demonstrate that LUSTER outperforms state-of-the-art methods for link prediction in multi-layer networks, improving AUC by up to 15.87%.
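To make the "orthogonal fusion" step of the complementary enhancer concrete, the sketch below shows one plausible, minimal implementation: the shared-latent representation is decomposed into a component parallel to the intra-layer representation and an orthogonal residual, and only the residual is concatenated so the fused vector carries non-redundant information. The function name, tensor shapes, and projection-based formulation are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of an orthogonal-fusion step (not LUSTER's actual code).
import torch

def orthogonal_fusion(intra_repr: torch.Tensor, shared_repr: torch.Tensor) -> torch.Tensor:
    """Fuse intra-layer and shared-latent node representations.

    Per node, the shared representation is split into a component parallel to
    the intra-layer representation and an orthogonal residual; keeping only the
    residual retains the complementary (non-redundant) information before the
    two parts are concatenated into an enhanced representation.
    """
    # Projection of shared_repr onto intra_repr along the feature dimension.
    dot = (shared_repr * intra_repr).sum(dim=-1, keepdim=True)
    norm_sq = (intra_repr * intra_repr).sum(dim=-1, keepdim=True).clamp_min(1e-12)
    parallel = dot / norm_sq * intra_repr
    orthogonal = shared_repr - parallel            # part not explained by intra_repr
    return torch.cat([intra_repr, orthogonal], dim=-1)

# Toy usage: 5 nodes, 16-dimensional embeddings from the two sources.
if __name__ == "__main__":
    z_intra = torch.randn(5, 16)    # intra-layer representations (one layer)
    z_shared = torch.randn(5, 16)   # shared-latent space representations
    fused = orthogonal_fusion(z_intra, z_shared)
    print(fused.shape)              # torch.Size([5, 32])
```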
Submission Number: 1584