FRAM: Frobenius-Regularized Assignment Matching with Mixed-Precision Computing

Published: 18 Sept 2025, Last Modified: 29 Oct 2025. NeurIPS 2025 poster. License: CC BY 4.0
Keywords: graph matching, network alignment, quadratic assignment problem, linear assignment problem
TL;DR: We present a graph matching algorithm that mitigates relaxation bias through a regularization term and achieves substantial acceleration via an innovative mixed-precision architecture.
Abstract: Graph matching, usually cast as a discrete Quadratic Assignment Problem (QAP), aims to identify correspondences between nodes in two graphs. Since QAP is NP-hard, many methods relax its discrete constraints by projecting the discrete feasible set onto its convex hull and solving the resulting continuous problem. However, these relaxations inevitably enlarge the feasible set and introduce two errors: sensitivity to numerical scales and geometric misalignment between the relaxed and original feasible domains. To address these issues, we propose a novel relaxation framework that reformulates the projection step as a Frobenius-Regularized Linear Assignment (FRA) problem. This formulation incorporates a tunable regularization term to curb the inflation of the feasible region and ensure numerical scale invariance. To solve the FRA efficiently, we introduce a scaling algorithm for doubly stochastic normalization. Leveraging its favorable computational properties, we design a theoretically grounded, accelerated mixed-precision algorithm. Building on these components, we propose Frobenius-Regularized Assignment Matching (FRAM), which approximates the QAP solution through a sequence of FRA problems. Extensive CPU experiments show that FRAM consistently outperforms all baselines. On GPUs, with mixed precision, FRAM achieves up to a 370× speedup over its FP64 CPU implementation without sacrificing accuracy.
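To make the doubly stochastic normalization step mentioned in the abstract concrete, below is a minimal sketch of a generic Sinkhorn-style scaling loop in Python/NumPy. It only illustrates the standard idea of alternating row and column rescaling toward the doubly stochastic set; the function name `sinkhorn_normalize`, the iteration budget, and the tolerance are illustrative assumptions, and this is not the paper's FRA-specific scaling algorithm nor its mixed-precision GPU implementation.

```python
# Illustrative only: generic Sinkhorn-style scaling toward a doubly stochastic
# matrix. This is NOT the FRA solver from the paper; the Frobenius-regularized
# update and the mixed-precision scheduling described there are not reproduced.
import numpy as np

def sinkhorn_normalize(K, n_iters=100, tol=1e-9):
    """Alternately rescale the rows and columns of a positive matrix K
    until it is approximately doubly stochastic."""
    P = np.asarray(K, dtype=np.float64).copy()
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # row normalization
        P /= P.sum(axis=0, keepdims=True)   # column normalization
        # Stop once the row sums are close enough to 1 (columns are exact here).
        if np.abs(P.sum(axis=1) - 1.0).max() < tol:
            break
    return P

# Example: normalize a random positive similarity matrix between two 5-node graphs.
rng = np.random.default_rng(0)
S = np.exp(rng.standard_normal((5, 5)))
P = sinkhorn_normalize(S)
print(np.allclose(P.sum(axis=0), 1.0, atol=1e-6),
      np.allclose(P.sum(axis=1), 1.0, atol=1e-6))
```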
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 21851