Keywords: Reinforcement learning, Algebraic Multigrid, Graph neural networks, AMG coarsening, Sparse linear systems, PDEs
TL;DR: We introduce CoarseRL, a graph-based reinforcement learning framework that learns effective AMG coarsening policies, achieving performance comparable to classical heuristics across a range of diffusion problems.
Abstract: Solving large sparse linear systems $\mathbf{A}\mathbf{x}=\mathbf{b}$ is central to many scientific and engineering applications. Algebraic Multigrid (AMG) achieves optimal linear complexity for suitable problems, but its performance critically depends on coarsening strategies that are largely heuristic-driven and sensitive to anisotropy, heterogeneity, and problem geometry. We introduce \textsc{CoarseRL}, a graph-based reinforcement learning (RL) framework that learns coarse--fine (CF) splitting policies directly from the sparse matrix $\mathbf{A}$. CF splitting is formulated as a sequential decision process on the matrix graph, in which an agent selects coarse variables to optimize cumulative reward signals derived from classical AMG principles such as diagonal dominance. We present a systematic empirical study evaluating combinations of two RL algorithms, two GNN architectures, multiple reward formulations, and a range of diffusion and anisotropic diffusion problems on both structured and unstructured meshes. Our experiments show that \textsc{CoarseRL} can achieve coarsening quality comparable to, and in many cases exceeding, that of classical greedy heuristics. These findings provide practical insights and guidelines for applying RL to AMG coarsening and demonstrate a reproducible pathway toward data-driven, robust coarsening algorithms for large-scale PDE simulations.\footnote{Code and datasets will be released upon publication.}
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: soha13yusuf@gmail.com
Submission Number: 107