TL;DR: Learning a latent-space representation of observed structured solutions through a differentiable constrained optimization problem
Abstract: Learning representations for solutions of constrained optimization problems (COPs) with unknown cost functions is challenging, as models like (Variational) Autoencoders struggle to capture the constraints needed to decode structured outputs. We propose an inverse optimization latent variable model (IO-LVM) that constructs a latent space of COP costs from observed solutions, enabling the inference of feasible and meaningful solutions by reconstructing them with a COP solver in the loop. To achieve this, we leverage estimated gradients of a Fenchel-Young loss through a non-differentiable deterministic solver while shaping the embedding space. In contrast to established Inverse Optimization or Inverse Reinforcement Learning methods, which typically identify a single or context-conditioned cost function, we exploit the learned representation to capture underlying COP cost structures and to identify solutions likely originating from different agents or conditions, each using a distinct cost function when making decisions. Using both synthetic and real ship-routing data, we validate our approach through experiments on path and cycle inference problems, demonstrating the interpretability of the latent space and its effectiveness in reconstructing paths and cycles and predicting their distributions.
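The abstract's core mechanism, estimating gradients of a Fenchel-Young loss through a non-differentiable deterministic solver, can be illustrated with a toy sketch. This is not the paper's implementation: the solver here is a simple argmax over item scores rather than a path or cycle COP, the Gaussian perturbation trick follows the standard perturbed-optimizer recipe, and the names (`solver`, `fy_grad`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def solver(theta):
    # Stand-in for a deterministic, non-differentiable COP solver:
    # returns the one-hot indicator of the highest-score item.
    y = np.zeros_like(theta)
    y[np.argmax(theta)] = 1.0
    return y

def fy_grad(theta, y_obs, n_samples=100, eps=0.5):
    # Monte Carlo estimate of the Fenchel-Young loss gradient:
    # E[solver(theta + eps * Z)] - y_obs, with Gaussian noise Z.
    # The expectation smooths the piecewise-constant solver output.
    y_hat = np.mean(
        [solver(theta + eps * rng.normal(size=theta.shape))
         for _ in range(n_samples)],
        axis=0,
    )
    return y_hat - y_obs

# Recover scores theta whose solver output matches an observed solution.
theta = np.zeros(4)
y_obs = np.array([0.0, 0.0, 1.0, 0.0])  # observed structured solution
for _ in range(200):
    theta -= 0.1 * fy_grad(theta, y_obs)

recovered = int(np.argmax(theta))
```

In the IO-LVM setting described above, `theta` would instead be decoded from a latent code, so these estimated gradients flow back through the decoder to shape the embedding space.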
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: neural networks, constrained optimization, variational autoencoders, path planning
Submission Number: 10494