Bad Values but Good Behavior: Learning Highly Misspecified Bandits and MDPs

TMLR Paper 4161 Authors

07 Feb 2025 (modified: 21 Apr 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Parametric, feature-based reward models are employed by a variety of algorithms in decision-making settings such as bandits and Markov decision processes (MDPs). These algorithms are typically analysed under the assumption of realizability, i.e., that the true values of actions are perfectly explained by some parametric model in the class. We are, however, interested in the situation where the true values are (significantly) misspecified with respect to the model class. For parameterized bandits, contextual bandits and MDPs, we identify structural conditions, depending on the problem instance and model class, under which basic algorithms such as $\epsilon$-greedy, LinUCB and fitted Q-learning provably learn optimal policies even under highly misspecified models. This contrasts with existing worst-case results for, say, misspecified bandits, which show regret bounds scaling linearly with the time horizon, and it shows that there can be a nontrivially large set of bandit instances that are robust to misspecification.
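
To illustrate the setting (not the authors' construction or algorithmic contribution), the following minimal Python sketch runs $\epsilon$-greedy with a least-squares linear reward fit on a hypothetical misspecified bandit: the true mean rewards are a nonlinear function of the arm features, so no linear model explains them exactly. The feature matrix, reward function, and all parameters are illustrative assumptions; whether the optimal arm is eventually favoured depends on the particular instance, which is exactly the kind of instance-dependent behaviour the paper studies.

```python
import numpy as np

# Hypothetical misspecified linear bandit: the true mean rewards are NOT a
# linear function of the arm features, yet epsilon-greedy with a least-squares
# fit may still concentrate on the optimal arm for favourable instances.
rng = np.random.default_rng(0)

K, d = 5, 2
features = rng.normal(size=(K, d))                      # one feature vector per arm (assumed)
true_means = np.tanh(features @ np.array([1.5, -0.7])) + 0.3 * np.sin(features[:, 0])
optimal_arm = int(np.argmax(true_means))


def eps_greedy(T=5000, eps=0.1, reg=1e-3, noise=0.1):
    A = reg * np.eye(d)          # regularized design matrix
    b = np.zeros(d)              # feature-weighted sum of observed rewards
    pulls_of_optimal = 0
    for _ in range(T):
        theta_hat = np.linalg.solve(A, b)               # current least-squares estimate
        if rng.random() < eps:
            arm = int(rng.integers(K))                  # explore uniformly
        else:
            arm = int(np.argmax(features @ theta_hat))  # exploit the (misspecified) fit
        reward = true_means[arm] + rng.normal(scale=noise)
        A += np.outer(features[arm], features[arm])
        b += reward * features[arm]
        pulls_of_optimal += (arm == optimal_arm)
    return pulls_of_optimal / T


print(f"fraction of pulls on the optimal arm: {eps_greedy():.3f}")
```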
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Incorporated the changes requested by all the reviewers (wherever possible)
Assigned Action Editor: ~Branislav_Kveton1
Submission Number: 4161