Keywords: Maximum Likelihood Estimation, Imitation Learning, Contextual Bandits
Abstract: We study the problem of learning to generate an answer (or completion) to a question (or prompt), where there could be multiple correct answers, any one of which is acceptable at test time. Learning is based on demonstrations of some correct answer to each training question, as in Supervised Fine-Tuning (SFT). We formalize the problem as apprenticeship learning (i.e., imitation learning) in contextual bandits, with offline demonstrations from some expert (optimal, or very good) policy, without explicitly observed rewards. In contrast to prior work, which assumes the demonstrator policy belongs to a low-complexity class, we propose relying only on the underlying reward model (i.e., specifying which answers are correct) being in a low-cardinality class, which we argue is a weaker assumption. We show that likelihood-maximization methods can fail in this setting, and instead present an approach that learns to answer nearly as well as the demonstrator, with sample complexity logarithmic in the cardinality of the reward class. Our method is similar to Syed and Schapire (NIPS 2007), when adapted to a contextual bandit (i.e., single-step) setup, but is a simple one-pass online approach that enjoys an ``optimistic rate'' (i.e., $1/\varepsilon$ when the demonstrator is optimal, versus $1/\varepsilon^2$ in Syed and Schapire) and works even with arbitrarily adaptive demonstrations.
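To make the low-cardinality reward-class assumption concrete, below is a minimal illustrative sketch, not the paper's algorithm: a version-space elimination over a finite class of reward functions, assuming (as in the optimistic case of the abstract) that the demonstrator is optimal, so every demonstrated answer is correct under the true reward model. All names here (`RewardFn`, `eliminate`, `answer_query`) are hypothetical and chosen only for illustration; the halving-style elimination conveys why sample complexity can scale with the logarithm of the reward class's cardinality rather than with the complexity of the demonstrator policy.

```python
# Illustrative sketch only (assumptions: finite reward class, optimal demonstrator).
# Not the paper's method; it merely demonstrates elimination over a reward class.
from typing import Callable, List, Tuple

# A reward function maps (question, answer) -> {0, 1}, i.e., which answers are correct.
RewardFn = Callable[[str, str], int]


def eliminate(version_space: List[RewardFn],
              demos: List[Tuple[str, str]]) -> List[RewardFn]:
    """Keep only reward functions that mark every demonstrated answer as correct."""
    return [r for r in version_space
            if all(r(x, a) == 1 for x, a in demos)]


def answer_query(version_space: List[RewardFn],
                 question: str,
                 candidate_answers: List[str]) -> str:
    """Pick the candidate judged correct by the most surviving reward functions."""
    return max(candidate_answers,
               key=lambda a: sum(r(question, a) for r in version_space))


if __name__ == "__main__":
    # Toy reward class: each hypothesis accepts exactly the answers ending in one fixed digit.
    suffixes = ["0", "1", "2"]
    reward_class: List[RewardFn] = [
        (lambda s: (lambda x, a: int(a.endswith(s))))(s) for s in suffixes
    ]
    # Demonstrations: (question, correct answer shown by the demonstrator).
    demos = [("2+0=?", "2"), ("1+1=?", "2")]
    surviving = eliminate(reward_class, demos)
    print(answer_query(surviving, "3-1=?", ["0", "1", "2"]))  # -> "2"
```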
Submission Number: 93