Keywords: Prompt-Completion, Imitation Learning, Likelihood Maximization
TL;DR: Taking a learning-theoretic view of SFT, we rethink existing modeling assumptions. Under relaxed assumptions on the reward model's capacity, we show MLE fails and design a new optimal learner.
Abstract: We study the problem of learning to generate an answer (or completion) to a question (or prompt), where there may be multiple correct answers, any one of which is acceptable at test time. Learning is based on demonstrations of some correct answer to each training question, as in Supervised Fine-Tuning (SFT). We formalize the problem as offline imitation learning in contextual bandits, with demonstrations from some optimal policy and without explicitly observed rewards. Prior work assumes that the demonstrator belongs to a low-complexity policy class, which motivates maximum likelihood estimation (i.e., log-loss minimization). In contrast, we propose relying only on the reward model (specifying which answers are correct) being in a low-cardinality class, which we argue is a weaker assumption. We show that likelihood maximization methods can fail in this case, and instead introduce a novel approach that learns with sample complexity logarithmic in the cardinality of the reward class. Our approach and guarantees are robust and apply even when learning from arbitrary demonstrators and in the relaxed $\mathsf{pass}$-$k$ error setting. Our work motivates looking beyond likelihood maximization when learning from demonstrations.
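For concreteness, a minimal sketch of the log-loss minimization (MLE) objective the abstract refers to; the notation here (training prompts $x_1,\dots,x_n$ with demonstrated completions $y_1,\dots,y_n$, and a policy class $\Pi$) is assumed for illustration rather than taken from the paper:
$$\hat{\pi}_{\mathrm{MLE}} \;\in\; \arg\max_{\pi \in \Pi} \;\sum_{i=1}^{n} \log \pi(y_i \mid x_i),$$
i.e., the learner selects the policy in $\Pi$ that maximizes the likelihood of the demonstrated completions, which is the SFT baseline the paper argues can fail under the weaker reward-class assumption.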
Primary Area: learning theory
Submission Number: 15326