Keywords: Graph Neural Networks, Traveling Salesman Problem, Combinatorial Optimization, Generalization, Rethinking
Abstract: Graph Convolutional Networks (GCNs) have demonstrated strong potential for addressing the Traveling Salesman Problem (TSP). However, existing GCN-based TSP solvers still struggle with limited generalization, overfitting, and extension to asymmetric TSPs. To address these challenges, we rethink how to enable models to learn unified and generalizable representations for TSPs. Specifically, we introduce three encoding strategies: global node embedding for **Input Unification**, Min-Max scaling for universal **Edge Normalization**, and layer-wise expanding views for **Aggregation Enhancement**. These designs culminate in **UNE-GCN**, a model that learns generalizable TSP representations with strong robustness, favorable learning dynamics, and linear scalability. Extensive experiments show that UNE-GCN can guide LKH-3 to search more efficiently, and that the two-stage UNE-GCN + LKH-3 framework achieves superior solutions with less search time on both symmetric and asymmetric TSPs. To demonstrate the effectiveness of our encoding strategies, we conduct experimental comparisons of different encoding schemes, unveiling and validating the critical roles of **U-N-E** in advancing GCN-based TSP solvers. Experimental results show that UNE-GCN achieves state-of-the-art performance, with up to a **0.60%** improvement in Gap over plain LKH-3 on large-scale ATSPs and an **83%** reduction in error metric compared to the original GCN backbone on large-scale STSPs, providing insights for the design of more effective graph encoders.
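The Min-Max edge normalization mentioned in the abstract can be sketched as follows. This is a minimal illustration of scaling a TSP distance matrix into [0, 1]; the function name, the dense-matrix representation, and the epsilon guard are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def minmax_normalize_edges(dist: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale all edge weights of a (possibly asymmetric) distance matrix
    into [0, 1] via Min-Max scaling: (d - d_min) / (d_max - d_min).

    Note: this is a hypothetical sketch, not the paper's exact scheme."""
    d_min, d_max = dist.min(), dist.max()
    # eps guards against a degenerate instance where all edges are equal
    return (dist - d_min) / max(d_max - d_min, eps)

# Example: a 2-city asymmetric instance
dist = np.array([[0.0, 2.0],
                 [4.0, 0.0]])
normed = minmax_normalize_edges(dist)
```

Because the scaling depends only on the instance's own extremes, instances of any absolute scale (kilometers, pixels, unit squares) map to the same bounded range, which is one plausible reason such a normalization helps a single encoder generalize across instance distributions.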
Primary Area: optimization
Submission Number: 10294