MazeNet: An Accurate, Fast, & Scalable Deep Learning Solution for Steiner Minimum Trees

26 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Recurrent Convolutional Neural Networks (RCNNs), Obstacle-Avoiding Rectilinear Steiner Minimum Tree (OARSMT), Deep learning for maze-solving, Search algorithm for termination condition, Graph-to-image transformation
Abstract: The Obstacle-Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem, which seeks the shortest interconnection of a given set of terminals in a rectilinear plane while avoiding obstacles, is a critical task in integrated circuit design, network optimization, and robot path planning. Since OARSMT is NP-hard, exact algorithms scale poorly with the number of terminals, leading practical solvers to sacrifice accuracy on large problems. For smaller-scale instances, however, there is no reason to settle for anything less than the true shortest interconnection. To address this gap, we propose and study MazeNet, a deep learning-based method that learns to solve the OARSMT from data. MazeNet reframes OARSMT as a maze-solving task that can be addressed with a recurrent convolutional neural network (RCNN). A key feature of MazeNet is its ability to generalize: we only need to train the RCNN blocks on mazes with a small number of terminals; mazes with a larger number of terminals can be solved simply by replicating the same pre-trained blocks to create a larger network. Across a wide range of experiments, MazeNet achieves perfect OARSMT-solving accuracy with substantially reduced runtime compared to classical exact algorithms, and its perfect accuracy ensures shorter path lengths compared to state-of-the-art approximation algorithms.
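
The abstract's core architectural idea is a weight-tied recurrent convolutional block that is iterated (replicated) more times to handle larger maze instances. The sketch below is a minimal illustration of that idea only, not the authors' implementation: the module names, channel widths, residual update, and per-pixel output head are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's code): a single shared convolutional
# block applied repeatedly to a rasterized maze. Replicating the same pre-trained
# block more times stands in for "creating a larger network" on bigger instances.
import torch
import torch.nn as nn


class RecurrentConvBlock(nn.Module):
    """One weight-tied convolutional block; iterating it forms the RCNN."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual update of the hidden maze state.
        return self.body(x) + x


class MazeSolverSketch(nn.Module):
    def __init__(self, in_channels: int = 3, channels: int = 32):
        super().__init__()
        self.encode = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.block = RecurrentConvBlock(channels)  # one set of weights, reused
        self.decode = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, maze_image: torch.Tensor, num_iters: int) -> torch.Tensor:
        # More terminals / larger mazes -> more iterations of the same block.
        h = self.encode(maze_image)
        for _ in range(num_iters):
            h = self.block(h)
        # Per-pixel score for "this pixel lies on the predicted tree".
        return torch.sigmoid(self.decode(h))


# Illustrative usage: one 64x64 rasterized maze, 20 iterations of the shared block.
model = MazeSolverSketch()
maze = torch.rand(1, 3, 64, 64)
solution_mask = model(maze, num_iters=20)
```

In this reading, the "search algorithm for termination condition" mentioned in the keywords would decide how many iterations to run at inference time, rather than fixing `num_iters` in advance; that choice is likewise an assumption here.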
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7863