Multi-class and Multi-task Strategies for Neural Directed Link Prediction

Published: 2025 · Last Modified: 20 Jan 2026 · ECML/PKDD (8) 2025 · License: CC BY-SA 4.0
Abstract: Link Prediction is a foundational task in Graph Representation Learning, supporting applications like link recommendation, knowledge graph completion, and graph generation. Graph Neural Networks (GNNs) have shown promising results in this domain and are widely used for learning graph representations from data. However, not all GNNs can represent edge direction, and not all training strategies support the learning of directed graph representations. For this reason, Neural Directed Link Prediction (NDLP) has been divided into three sub-tasks. Models that perform well in distinguishing uncorrelated samples of positive and negative directed edges (the general DLP task) do not necessarily capture edge directionality and bidirectionality (two additional tasks). While many models can be trained to perform well on any of the sub-tasks, most fail to perform well across them all. In this work, we propose three novel training strategies that address the three sub-tasks simultaneously and can be applied to any autoencoder-based GNN without modifying its architecture. Our first strategy, the Multi-Class Framework for Neural Directed Link Prediction (MC-NDLP), maps NDLP to a Multi-Class training objective. The second and third strategies adopt a Multi-Task perspective, either with a Multi-Objective (MO-NDLP) or a Scalarized (S-NDLP) approach. We show that these training strategies allow many models to achieve higher or more balanced performance across the three NDLP sub-tasks. The flexibility offered by our proposed training strategies provides a powerful means for improving the capabilities of NDLP models and advancing the state of Neural Directed Link Prediction.
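The abstract states that MC-NDLP maps directed link prediction to a multi-class objective, but does not spell out the class definitions. A minimal sketch of one plausible instantiation, assuming each unordered node pair is assigned one of four classes (no edge, forward only, backward only, bidirectional) and trained with cross-entropy; the function names and the four-class encoding are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def edge_pair_class(adj, u, v):
    """Assumed 4-class encoding of a node pair (u, v):
    0 = no edge, 1 = u->v only, 2 = v->u only, 3 = bidirectional."""
    fwd, bwd = adj[u, v], adj[v, u]
    if fwd and bwd:
        return 3
    if fwd:
        return 1
    if bwd:
        return 2
    return 0

def multiclass_ndlp_loss(logits, labels):
    """Mean cross-entropy over the four classes; logits shape (n_pairs, 4)."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy directed graph: edge 0->1 only, edges 1<->2 bidirectional.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 1, 0]])
pairs = [(0, 1), (1, 2), (0, 2)]
labels = [edge_pair_class(adj, u, v) for u, v in pairs]  # [1, 3, 0]

# Decoder scores for each pair would come from the GNN autoencoder;
# random logits stand in here.
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(pairs), 4))
loss = multiclass_ndlp_loss(logits, labels)
```

A single softmax over these four classes forces the model to discriminate direction and bidirectionality jointly, rather than treating each directed edge as an independent binary decision; the scalarized variant (S-NDLP) would instead combine separate per-sub-task losses, e.g. as a weighted sum.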