Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes

Published: 01 May 2025 (Last Modified: 18 Jun 2025) · ICML 2025 poster · CC BY 4.0
TL;DR: We train graph neural networks to solve a problem arising in abstract algebra and then use interpretability techniques to independently discover a theorem
Abstract: Machine learning is becoming an increasingly valuable tool in mathematics, enabling one to identify subtle patterns across collections of examples so vast that no single researcher could feasibly review and analyze them. In this work, we use graph neural networks to investigate quiver mutation---an operation that transforms one quiver (or directed multigraph) into another---which is central to the theory of cluster algebras, with deep connections to geometry, topology, and physics. In the study of cluster algebras, the question of mutation equivalence is of fundamental concern: given two quivers, can one efficiently determine whether one can be transformed into the other through a sequence of mutations? In this paper, we use graph neural networks and AI explainability techniques to independently discover mutation equivalence criteria for quivers of type $\tilde{D}$. Along the way, we also show that even without explicit training to do so, our model captures structure within its hidden representation that allows us to reconstruct known criteria for type $D$, adding to the growing evidence that modern machine learning models are capable of learning abstract and general rules from mathematical data.
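For readers unfamiliar with the operation, quiver mutation has a standard combinatorial description via the quiver's skew-symmetric exchange matrix $B$, where $B_{ij} > 0$ records the number of arrows from vertex $i$ to vertex $j$. The sketch below implements this textbook rule (due to Fomin and Zelevinsky); it is an illustrative aid, not code from the paper, and the function name `mutate` is our own.

```python
def mutate(B, k):
    """Mutate a quiver at vertex k, represented by its skew-symmetric
    exchange matrix B (list of lists of ints; B[i][j] > 0 means
    B[i][j] arrows from i to j).

    Mutation rule: entries in row/column k flip sign; every other
    entry becomes b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2.
    """
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = B[i][j] + (
                    abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])
                ) // 2
    return Bp


# Example: the linear type A_3 quiver 0 -> 1 -> 2.
A3 = [[0, 1, 0],
      [-1, 0, 1],
      [0, -1, 0]]

# Mutating at the middle vertex produces an oriented 3-cycle,
# and mutating twice at the same vertex returns the original quiver
# (mutation is an involution).
cycle = mutate(A3, 1)
```

Two quivers are mutation equivalent when some sequence of such moves connects them; deciding this efficiently is the problem the paper attacks with GNNs.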
Lay Summary: As machine learning is applied to more and more tasks, mathematics offers a unique opportunity. On one hand, ML can help mathematicians sift through mountains of examples. On the other hand, mathematics has clear rules and logic that help us understand what ML models are doing internally. In this work, we applied a GNN---a special ML model that works on networks---to a math problem with deep connections to algebra and physics. We then used AI explainability tools to interpret the results, independently discovering a nontrivial theorem about this problem and recovering another known result. Our findings show that ML models can learn abstract rules from mathematical problems.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications->Everything Else
Keywords: AI for Math, Interpretability, Graph neural networks, Algorithmic reasoning
Submission Number: 3962