Genetic Curriculum Learning for Distribution Generalization on the Travelling Salesman Problem

Published: 10 Oct 2024, Last Modified: 31 Oct 2024 · MATH-AI 24 · CC BY 4.0
Keywords: combinatorial optimization, mathematical reasoning, evolutionary algorithms, travelling salesman problem, deep reinforcement learning
TL;DR: We propose a genetic curriculum for improving TSP model robustness on distributions of practical interest, filling in where LLMs and prior NCO methods fall short.
Abstract:

The Travelling Salesman Problem (TSP) is a classic NP-hard combinatorial optimization task with numerous practical applications. Classic heuristic solvers, and Large Language Models (LLMs) that use such solvers as tools, can attain near-optimal performance on small problem instances, but become computationally intractable for larger problems. Real-world logistics problems, such as dynamically re-routing last-mile deliveries, demand a solver with fast inference time, which has led to specialized neural network solvers being favored in practice. However, neural networks struggle to generalize beyond the synthetic data they were trained on. In particular, we show that there exist practically realistic TSP distributions that consistently lead to poor worst-case performance for existing neural approaches. To address distributional robustness, we present Genetic Curriculum Learning (GCL), a novel, efficient approach utilizing automatic curricula. We also present TSPLib50, a dataset of realistically distributed TSP samples, which tests real-world distribution generalization without conflating this issue with TSP instance size. We evaluate our method on various synthetic datasets as well as TSPLib50, and compare against state-of-the-art LLM results and neural baselines. We demonstrate that GCL improves distributional robustness, with most of its performance gains coming from worst-case scenarios.
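As context for the abstract's contrast between classic heuristics and learned solvers, the following is a minimal sketch of one classic constructive heuristic (nearest neighbor) on a uniform-random instance; the instance size and all names here are illustrative and not taken from the paper. Uniform-random points like these are the typical synthetic training distribution for neural solvers, whereas realistic (e.g. TSPLib-style, clustered) instances can look quite different:

```python
import math
import random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour over the given points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbor_tour(points, start=0):
    """Greedy construction: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Illustrative uniform-random instance in the unit square.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
tour = nearest_neighbor_tour(cities)
print(f"nearest-neighbor tour length: {tour_length(cities, tour):.3f}")
```

This heuristic runs in O(n²) time, which is why such solvers remain attractive when inference speed matters, even though the tours they produce are only approximately optimal.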

Concurrent Submissions: N/A
Submission Number: 97