REFINE: Enabling Efficient and Trustworthy Modeling of Financial Networks via GNN-to-MLP Knowledge Distillation

Published: 2025, Last Modified: 05 Feb 2026 · DSAA 2025 · CC BY-SA 4.0
Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools for modeling financial data as networks, effectively capturing both individual attributes and complex relationships. However, their inherent message-passing and aggregation operations introduce significant inference latency, limiting their applicability in latency-sensitive domains such as finance, healthcare, and robotics. Recent efforts have attempted to mitigate this limitation by distilling GNN knowledge into more efficient Multi-Layer Perceptrons (MLPs). While promising for reducing inference costs, existing GNN-to-MLP distillation approaches face three critical challenges: (1) reliance on labeled data, (2) limited robustness to noisy or perturbed inputs due to the absence of structural information, and (3) susceptibility to representational bias. To address these issues, we propose REFINE, a novel self-supervised GNN-to-MLP knowledge distillation framework. Our method enhances model stability and fairness through structure-free feature augmentations, including noise injection and counterfactual generation. Extensive experiments on two real-world financial datasets and one social network benchmark demonstrate that our approach consistently outperforms existing distillation baselines, achieving a favorable trade-off between predictive utility, stability, and fairness.
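To make the core idea concrete, the sketch below illustrates generic GNN-to-MLP distillation with noise-injection augmentation, not the authors' actual REFINE implementation. All names, shapes, and hyperparameters (`sigma`, `lr`, the linear student) are hypothetical; the teacher embeddings `Z` stand in for outputs of a pretrained GNN, and a plain NumPy student is trained to match them so that inference needs no message passing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: node features X and teacher embeddings Z,
# where Z would come from a pretrained GNN run offline.
N, d_in, d_out = 64, 16, 8
X = rng.normal(size=(N, d_in))
Z = rng.normal(size=(N, d_out))           # stand-in for GNN outputs

# Linear student for brevity; a real student would be a deeper MLP.
W = rng.normal(scale=0.1, size=(d_in, d_out))

def student(x, W):
    # Structure-free student: per-node features only, no graph needed.
    return x @ W

initial_loss = float(np.mean((student(X, W) - Z) ** 2))

lr, sigma = 0.05, 0.1                     # sigma: noise-injection scale
for step in range(500):
    # Structure-free augmentation: perturb input features with Gaussian
    # noise so the student stays stable under input perturbations.
    X_aug = X + rng.normal(scale=sigma, size=X.shape)
    pred = student(X_aug, W)
    grad = 2.0 * X_aug.T @ (pred - Z) / N  # gradient of MSE distillation loss
    W -= lr * grad

final_loss = float(np.mean((student(X, W) - Z) ** 2))
```

At deployment only `student` is evaluated, which is why distilled MLPs avoid the aggregation latency of the teacher GNN; REFINE's additional counterfactual generation and fairness objectives are not reflected in this toy sketch.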