EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: A fundamental problem in combinatorial optimization is identifying equivalent formulations. Despite the growing need for automated equivalence checks---driven, for example, by *optimization copilots*, which generate problem formulations from natural language descriptions---current approaches rely on simple heuristics that fail to reliably check formulation equivalence. Inspired by Karp reductions, in this work we introduce *Quasi-Karp equivalence*, a formal criterion for determining when two optimization formulations are equivalent based on the existence of a mapping between their decision variables. We propose *EquivaMap*, a framework that leverages large language models to automatically discover such mappings for scalable, reliable equivalence checking, with a verification stage that ensures mapped solutions preserve feasibility and optimality without additional solver calls. To evaluate our approach, we construct *EquivaFormulation*, the first open-source dataset of equivalent optimization formulations, generated by applying transformations such as adding slack variables or valid inequalities to existing formulations. Empirically, *EquivaMap* significantly outperforms existing methods, achieving substantial improvements in correctly identifying formulation equivalence.
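The abstract describes a verification stage that checks whether a mapped solution remains feasible and optimal without extra solver calls. The following is a minimal, hypothetical sketch of such a check, assuming the candidate mapping proposed by the LLM can be expressed as an affine map x_B = M x_A + c and that formulation B is available in matrix form (A_ub x <= b_ub with integrality requirements); the function name and data layout are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch: verify an LLM-proposed variable mapping between two formulations.
# Given an optimal solution x_A of formulation A, check that the mapped point is feasible
# in formulation B and attains B's known optimal value -- no additional solver call needed.
import numpy as np

def verify_mapping(x_A, M, c, A_ub, b_ub, obj_B, opt_val_B, tol=1e-6, integer_idx=()):
    """Return True if the mapped solution is feasible and optimal for formulation B."""
    x_B = M @ x_A + c                                    # apply the candidate affine mapping
    feasible = np.all(A_ub @ x_B <= b_ub + tol)          # linear constraints of B
    integral = all(abs(x_B[i] - round(x_B[i])) <= tol    # integrality requirements of B
                   for i in integer_idx)
    optimal = abs(obj_B @ x_B - opt_val_B) <= tol        # objective value preserved
    return feasible and bool(integral) and bool(optimal)
```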
Lay Summary: Optimization problems are often expressed in many different but equivalent ways. For example, the same scheduling or routing problem can be written using different variables or constraints. However, automatically determining whether two such formulations are truly equivalent—meaning they represent the same problem and have the same solutions—is a challenging and important task. In this work, we introduce a new formal notion called Quasi-Karp equivalence to rigorously define when two optimization problems are equivalent through efficient transformations between their variables. To detect this equivalence automatically, we develop EquivaMap, a novel framework that leverages large language models (LLMs) to discover mappings between variables of two problem formulations. We create the first dataset of equivalent optimization problems by applying common transformations, and show that EquivaMap significantly outperforms existing heuristic methods in identifying equivalences, even under complex changes like variable rescaling. Our approach enables reliable verification of AI-generated optimization models, improving trust and interoperability in automated decision-making systems.
Link To Code: https://github.com/HumainLab/EquivaMap
Primary Area: Optimization->Discrete and Combinatorial Optimization
Keywords: Combinatorial Optimization, Large Language Models, Mixed Integer Linear Programming, AI for OR
Submission Number: 5288