Mixture of Experts Characteristic Function Embeddings for Heterogeneous Fraud Graphs

17 Sept 2025 (modified: 15 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Mixture-of-Experts, Characteristic Functions, Graph Representation Learning, Fraud Detection, Heterogeneous Graphs
TL;DR: Decouple-then-fuse node representation learning for fraud detection
Abstract: Fraud detection over heterogeneous graphs requires reasoning over multiplex relations, attribute polymorphism, and structural heterophily, yet prevailing detectors entangle these orthogonal biases into monolithic pipelines assumed to generalize across domains. We address this limitation with a decouple-then-fuse representation learning scheme that isolates structural and attribute channels before reintroducing interaction through an adaptive fusion interface. On the structural side, we encode distributional neighborhood context via characteristic-function signatures compressed through randomized spectral factorization; on the attribute side, we deploy input-adaptive Mixture-of-Experts projections that specialize each instance to role-conditioned patterns. The two views are then reconciled through a Bayesian mean-difference fusion layer that models per-node consensus and discrepancy, enabling calibrated integration under heterophily and cross-modal conflict. Empirical evaluation on benchmark fraud graphs spanning telecom records, e-commerce reviews, and cryptocurrency transactions shows improved fraud-node detection, attributable to the model's ability to disentangle structural and attribute features and reconcile their discrepancies.
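To make the structural channel concrete, below is a minimal NumPy sketch of the two ingredients named in the abstract: an empirical characteristic-function signature of a node's neighborhood feature distribution, followed by a randomized spectral factorization (randomized range finder plus SVD) that compresses the stacked signatures. All function names, frequency grids, and dimensions here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def cf_signature(neighbor_feats, freqs):
    # Empirical characteristic function of the neighborhood feature
    # distribution, evaluated at sample frequencies `freqs`:
    #   phi(t) = E_x[exp(i * t * x)], averaged over the node's neighbors.
    # neighbor_feats: (n_neighbors, d); freqs: (k,)  -- both hypothetical shapes.
    proj = neighbor_feats[:, :, None] * freqs[None, None, :]   # (n, d, k)
    phi = np.exp(1j * proj).mean(axis=0)                       # (d, k)
    # Concatenate real and imaginary parts into a real-valued signature.
    return np.concatenate([phi.real.ravel(), phi.imag.ravel()])

# Toy data: characteristic-function signatures for 50 synthetic nodes.
rng = np.random.default_rng(0)
freqs = np.linspace(0.5, 4.0, 8)            # assumed frequency grid
sigs = np.stack([
    cf_signature(rng.normal(size=(20, 5)), freqs) for _ in range(50)
])                                          # (50 nodes, 80-dim signatures)

# Randomized spectral factorization: sketch the column space with a
# Gaussian test matrix, orthonormalize, then SVD the small projection.
r = 16                                      # target embedding rank (assumed)
omega = rng.normal(size=(sigs.shape[1], r + 4))   # oversampled sketch
Q, _ = np.linalg.qr(sigs @ omega)                 # orthonormal range basis
U, S, Vt = np.linalg.svd(Q.T @ sigs, full_matrices=False)
embeddings = (Q @ U)[:, :r] * S[:r]               # (50, r) structural embeddings
print(embeddings.shape)
```

Because the characteristic function fully determines the neighborhood distribution, such signatures can distinguish neighborhoods that share means but differ in higher moments, which is one way to read the abstract's emphasis on "distributional neighborhood context".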
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 8321