Multi-view constraint disentangled GAT for recommendation

Published: 2025 · Last Modified: 28 Oct 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Disentangled Graph Convolutional Networks (disentangled GCNs) can explicitly learn embeddings according to users' intents, which improves both the accuracy and interpretability of recommendations. However, traditional disentangled GCNs rely primarily on first-order direct interaction views and neglect high-order views and type information, leading to mutual confusion among heterogeneous embeddings and inaccurate representations. Moreover, the aggregation methods used in these models fail to distinguish the varying importance of different nodes and their corresponding intents, which degrades the quality of the learned representations. To address these problems, we propose the Multi-view Constraint Disentangled Graph Attention Network (MC-DGAT). Specifically, MC-DGAT integrates intent disentanglement and multi-view constraint mechanisms into the graph attention network. The model incorporates high-order interaction views, capturing multi-hop relationships among homogeneous nodes, which strengthens its ability to discern implicit high-order connections and enhances robustness and generalization. Additionally, attention weights are dynamically assigned to reflect the varying importance of different neighboring nodes and intents, yielding more accurate node embeddings and improved recommendation accuracy. Experiments on three real-world datasets demonstrate the superiority of the proposed approach. Our code is available at https://github.com/lustrelake/MC-DGAT.
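To make the core idea concrete, below is a minimal sketch of an intent-disentangled graph attention layer of the kind the abstract describes: node embeddings are projected into K intent-specific subspaces and neighbors are aggregated with per-intent attention weights. This is an illustrative approximation written for this summary, not the authors' implementation; the class name, the number of intents, and all hyperparameters are assumptions, and the actual MC-DGAT model (including its multi-view constraints and high-order views) is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledGATLayer(nn.Module):
    """Sketch of one intent-disentangled attention layer (illustrative only).

    Each node embedding is projected into K intent-specific subspaces, and
    GAT-style attention is computed separately per intent, so that different
    neighbors and different intents contribute with different importance.
    """

    def __init__(self, in_dim: int, out_dim: int, num_intents: int = 4):
        super().__init__()
        self.K = num_intents
        self.d = out_dim // num_intents
        # One linear projection per latent intent (assumed disentanglement scheme).
        self.proj = nn.ModuleList(
            [nn.Linear(in_dim, self.d, bias=False) for _ in range(num_intents)]
        )
        # Additive attention scoring vector per intent, as in the original GAT.
        self.attn = nn.ParameterList(
            [nn.Parameter(torch.randn(2 * self.d) * 0.01) for _ in range(num_intents)]
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; edge_index: [2, E] with rows (source, target).
        src, dst = edge_index
        outputs = []
        for k in range(self.K):
            h = self.proj[k](x)  # intent-k subspace embeddings, [N, d]
            # Unnormalized attention logits for every edge under intent k.
            e = torch.cat([h[src], h[dst]], dim=-1) @ self.attn[k]
            e = F.leaky_relu(e, negative_slope=0.2)  # [E]
            # Softmax over the incoming edges of each target node.
            alpha = torch.exp(e - e.max())
            denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, alpha)
            alpha = alpha / (denom[dst] + 1e-16)
            # Attention-weighted aggregation of neighbor messages per intent.
            out_k = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
            outputs.append(out_k)
        # Concatenate the K intent-specific embeddings into the output representation.
        return torch.cat(outputs, dim=-1)  # [N, K * d]
```

As a usage sketch, a layer such as `DisentangledGATLayer(64, 64, num_intents=4)` applied to a user-item bipartite graph would produce embeddings whose four 16-dimensional chunks correspond to separate latent intents; user and item scores can then be taken as inner products of these embeddings, which is the standard setup for GCN-based recommenders.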