Revisiting Online Learning Approach to Inverse Linear Optimization: A Fenchel–Young Loss Perspective and Gap-Dependent Regret Analysis

Published: 22 Jan 2025, Last Modified: 25 Feb 2025 | AISTATS 2025 Poster | CC BY 4.0
Abstract: This paper revisits the online learning approach to inverse linear optimization studied by Bärmann et al. (2017), where the goal is to infer an unknown linear objective function of an agent from sequential observations of the agent's input-output pairs. First, we provide a simple understanding of the online learning approach through its connection to online convex optimization of *Fenchel–Young losses*. As a byproduct, we present an offline guarantee on the *suboptimality loss*, which measures how well predicted objective vectors explain the agent's choices, without assuming the optimality of the agent's choices. Second, assuming that there is a gap between optimal and suboptimal objective values in the agent's decision problems, we obtain an upper bound independent of the time horizon $T$ on the sum of suboptimality and *estimate losses*, where the latter measures the quality of solutions recommended by predicted objective vectors. Interestingly, our gap-dependent analysis achieves a faster rate than the standard $O(\sqrt{T})$ regret bound by exploiting structures specific to inverse linear optimization, even though neither the loss functions nor their domains possess desirable properties, such as strong convexity.
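The following is a minimal, illustrative sketch of the online learning approach summarized in the abstract: online subgradient descent on the suboptimality (Fenchel–Young) loss, tracking both the suboptimality and estimate losses. It is not the paper's exact algorithm; the box-shaped feasible sets, the unit-ball projection, the learning rate, and all identifiers (`c_star`, `box_argmax`, etc.) are assumptions made for the example.

```python
import numpy as np

# Hedged sketch of online learning for inverse linear optimization.
# Assumption: at each round the agent maximizes a hidden objective
# c_star over a box {x : 0 <= x <= upper_t}; the learner observes the
# agent's choice and updates its estimate c_hat by a subgradient step
# on the suboptimality (Fenchel-Young) loss.

rng = np.random.default_rng(0)
n, T, eta = 5, 300, 0.1

c_star = rng.normal(size=n)
c_star /= np.linalg.norm(c_star)     # hidden objective, unit norm
c_hat = np.zeros(n)                  # learner's running estimate

def box_argmax(c, upper):
    """Maximizer of <c, x> over the box {x : 0 <= x <= upper}."""
    return np.where(c > 0, upper, 0.0)

subopt_total = estimate_total = 0.0
for t in range(T):
    upper = rng.uniform(0.5, 1.5, size=n)    # round-t feasible box
    x_t = box_argmax(c_star, upper)          # agent's observed choice
    x_hat = box_argmax(c_hat, upper)         # learner's recommendation

    # Suboptimality loss: max_x <c_hat, x> - <c_hat, x_t> >= 0.
    subopt_total += c_hat @ x_hat - c_hat @ x_t
    # Estimate loss: <c_star, x_t> - <c_star, x_hat> >= 0.
    estimate_total += c_star @ x_t - c_star @ x_hat

    # x_hat - x_t is a subgradient of the suboptimality loss at c_hat.
    c_hat -= eta * (x_hat - x_t)
    c_hat /= max(1.0, np.linalg.norm(c_hat))  # project onto unit ball

print(f"cumulative suboptimality loss: {subopt_total:.3f}")
print(f"cumulative estimate loss:      {estimate_total:.3f}")
```

In this toy setting both cumulative losses grow far more slowly than T, which is the kind of behavior the paper's $O(\sqrt{T})$ and gap-dependent bounds formalize.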
Submission Number: 17