Abstract: Node embeddings are low-dimensional vectors that capture node properties, typically learned through unsupervised structural similarity objectives or supervised tasks. While recent efforts have focused on post-hoc explanations for graph models, intrinsic interpretability in unsupervised node embeddings remains largely underexplored. To bridge this gap, we introduce DiSeNE (Disentangled and Self-Explainable Node Embedding), a framework that learns self-explainable node representations in an unsupervised fashion. By leveraging disentangled representation learning, DiSeNE ensures that each embedding dimension corresponds to a distinct topological substructure of the graph, thus offering clear, dimension-wise interpretability. We formulate new objective functions grounded in principled desiderata, jointly optimizing for structural fidelity, disentanglement, and human interpretability. Additionally, we propose several new metrics to evaluate representation quality and human interpretability. Extensive experiments on multiple benchmark datasets demonstrate that DiSeNE not only preserves the underlying graph structure but also provides transparent, human-understandable explanations for each embedding dimension.
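For illustration only, the sketch below shows one generic way dimension-wise interpretable node embeddings can be trained without supervision: non-negative embeddings act as soft memberships over substructures, an edge-reconstruction loss with negative sampling encourages structural fidelity, and a penalty on off-diagonal dimension co-activation discourages overlapping (entangled) dimensions. The function name, loss form, and hyperparameters here are assumptions for this sketch, not the DiSeNE objectives from the paper.

```python
# Hypothetical sketch of dimension-wise interpretable node embeddings.
# NOT the DiSeNE method; it only illustrates the general idea that each
# embedding dimension can act as a soft indicator of a distinct substructure.
import torch

def fit_dimensionwise_embeddings(edge_index, num_nodes, dim=8,
                                 overlap_weight=0.1, epochs=200, lr=0.05):
    # Non-negative embeddings: Z[v, k] ~ how strongly node v belongs to substructure k.
    Z = torch.nn.Parameter(torch.rand(num_nodes, dim))
    opt = torch.optim.Adam([Z], lr=lr)
    src, dst = edge_index  # edge_index: long tensor of shape [2, num_edges]
    for _ in range(epochs):
        opt.zero_grad()
        Zp = torch.relu(Z)  # keep memberships non-negative
        # Structural fidelity: linked nodes should share at least one active dimension.
        pos_scores = (Zp[src] * Zp[dst]).sum(dim=1)
        # Negative sampling: random node pairs should share as little as possible.
        neg_src = torch.randint(0, num_nodes, (src.numel(),))
        neg_dst = torch.randint(0, num_nodes, (src.numel(),))
        neg_scores = (Zp[neg_src] * Zp[neg_dst]).sum(dim=1)
        fidelity = (-torch.log(torch.sigmoid(pos_scores) + 1e-8).mean()
                    - torch.log(1 - torch.sigmoid(neg_scores) + 1e-8).mean())
        # Disentanglement: penalize co-activation of different dimensions.
        G = Zp.t() @ Zp / num_nodes
        overlap = (G - torch.diag(torch.diag(G))).abs().mean()
        loss = fidelity + overlap_weight * overlap
        loss.backward()
        opt.step()
    return torch.relu(Z).detach()

# Usage: a toy graph made of two triangles joined by a single edge.
edges = torch.tensor([[0, 0, 1, 3, 3, 4, 2],
                      [1, 2, 2, 4, 5, 5, 3]])
Z = fit_dimensionwise_embeddings(edges, num_nodes=6, dim=2)
print(Z)  # each column should roughly concentrate on one triangle
```

Under this kind of objective, inspecting the nodes that activate a given dimension yields a concrete subgraph, which is the sense in which each dimension can be read as an explanation.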
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Min_Wu2
Submission Number: 4600