Learning When Not to Learn: Risk-Sensitive Abstention in Bandits with Unbounded Rewards
TL;DR: We show that typical bandit exploration under unbounded rewards can cause catastrophic failures, and propose a caution-based algorithm that avoids such errors while achieving sublinear regret.
Abstract: In high-stakes AI applications, even a single action can cause irreparable damage. However, nearly all sequential decision-making theory assumes that all errors are recoverable (e.g., by bounding rewards). Standard bandit algorithms, which explore aggressively, can cause irreparable damage when this assumption fails. Some prior work avoids irreparable errors by asking a mentor for help, but a mentor may not always be available. In this work, we propose a contextual bandit model with unbounded rewards and no mentor, but with an abstention option. We provide a risk-sensitive algorithm that explores cautiously without risking irreparable errors. Under suitable conditions, we establish sublinear regret guarantees, theoretically demonstrating the effectiveness of cautious exploration for safely deploying learning agents in high-stakes environments.
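To make the abstention idea concrete, here is a minimal Python sketch of one plausible form of risk-sensitive abstention in a multi-armed bandit with unbounded (Gaussian) rewards. It is an illustration under stated assumptions, not the paper's algorithm: we assume each arm comes with a coarse prior lower bound on its mean reward, explore optimistically only among arms whose pessimistic confidence bound clears a safety threshold, and abstain whenever no arm is provably safe. Context is omitted for brevity, and all names, constants, and bounds are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_arms, horizon = 3, 1000
safety_threshold = 0.0                    # rewards below this model irreparable damage
prior_lcb = np.array([0.1, -10.0, 0.2])   # assumed coarse prior lower bounds (hypothetical)
true_means = np.array([0.5, -5.0, 1.0])   # hidden simulator ground truth

counts = np.zeros(n_arms)                 # pulls per arm
means = np.zeros(n_arms)                  # empirical mean reward per arm

def lcb(a, t):
    # Pessimistic bound; untried arms fall back to their prior lower bound.
    if counts[a] == 0:
        return prior_lcb[a]
    width = np.sqrt(2.0 * np.log(t + 1) / counts[a])
    return max(prior_lcb[a], means[a] - width)

def ucb(a, t):
    # Optimistic bound used to explore, but only among provably safe arms.
    if counts[a] == 0:
        return np.inf
    return means[a] + np.sqrt(2.0 * np.log(t + 1) / counts[a])

abstentions = 0
for t in range(1, horizon + 1):
    lcbs = np.array([lcb(a, t) for a in range(n_arms)])
    safe = lcbs >= safety_threshold
    if not safe.any():
        abstentions += 1                  # no arm is provably safe: abstain this round
        continue
    ucbs = np.array([ucb(a, t) if safe[a] else -np.inf for a in range(n_arms)])
    a = int(ucbs.argmax())
    reward = rng.normal(true_means[a])    # unbounded (Gaussian) reward
    counts[a] += 1
    means[a] += (reward - means[a]) / counts[a]

print(f"pulls per arm: {counts}, abstentions: {abstentions}")

In this toy run the agent never pulls the catastrophic middle arm, since its pessimistic bound never clears the safety threshold; optimism is confined to the provably safe arms, which is one way to explore cautiously without risking irreparable errors.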
Submission Number: 957