Solving the Traveling Salesman Problem with Positional Encoding

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Neural Combinatorial Optimization, Traveling Salesman Problem, Positional Encoding
TL;DR: Positional encodings from language models provide powerful biases for neural TSP solvers, enabling state-of-the-art results up to 10 000 cities.
Abstract: We propose transformer-based neural solvers for the Euclidean Traveling Salesman Problem (TSP) that rely on positional encodings rather than coordinate projections. By adapting ALiBi and RoPE, two modern positional encodings originally developed for large language models, to the Euclidean setting, our **Positional Encoding-based Neural Solvers (PENS)** inherit useful invariances and locality biases. To address the increased density of large instances, we introduce a simple yet effective rescaling of city coordinates that further boosts performance. Trained only on TSP-100, PENS achieves **state-of-the-art results on instances with up to 10 000 cities**, a scale previously dominated by methods requiring graph sparsification. These findings demonstrate that positional encodings provide effective inductive biases for neural combinatorial optimization.
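
To make the two ideas in the abstract concrete, below is a minimal sketch (not the authors' released code) of how an ALiBi-style bias might be carried over from 1D token offsets to 2D Euclidean distances, together with a hypothetical density rescaling. The per-head slope values, the `sqrt(n / n_ref)` scaling rule, and the function names `euclidean_alibi_bias` and `rescale_coords` are all illustrative assumptions, not details taken from the paper.

```python
import torch

def euclidean_alibi_bias(coords: torch.Tensor, slopes: torch.Tensor) -> torch.Tensor:
    """coords: (n, 2) city coordinates; slopes: (h,) per-head slopes.

    Returns an additive attention bias of shape (h, n, n) that penalizes
    attention between distant cities, mirroring ALiBi's linear penalty on
    token distance but measured in Euclidean distance instead.
    """
    dist = torch.cdist(coords, coords)                # (n, n) pairwise distances
    return -slopes.view(-1, 1, 1) * dist.unsqueeze(0)

def rescale_coords(coords: torch.Tensor, n_ref: int = 100) -> torch.Tensor:
    """Hypothetical density rescaling: in a unit square, nearest-neighbor
    spacing shrinks like 1/sqrt(n), so scaling coordinates by sqrt(n / n_ref)
    restores the spacing seen during training on n_ref-city instances."""
    n = coords.shape[0]
    return coords * (n / n_ref) ** 0.5

# Usage: bias the attention logits of an 8-head layer on a 1000-city instance.
coords = torch.rand(1000, 2)
coords = rescale_coords(coords)                       # match TSP-100 density
slopes = torch.tensor([2.0 ** -i for i in range(1, 9)])  # geometric, ALiBi-style
bias = euclidean_alibi_bias(coords, slopes)           # add to Q·K^T scores
```

Because the bias depends only on pairwise distances, it is invariant to translation and rotation of the instance, which is one plausible reading of the "useful invariances" the abstract attributes to PENS.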
Supplementary Material: zip
Primary Area: optimization
Submission Number: 12126