TL;DR: We extend prior theoretical results about quantifying weak-to-strong generalization beyond L2 loss, and verify our results with experiments.
Abstract: The paradigm of weak-to-strong generalization constitutes the training of a strong AI model on data labeled by a weak AI model, with the goal that the strong model nevertheless outperforms its weak supervisor on the target task of interest. For the setting of real-valued regression with the squared loss, recent work quantitatively characterizes the gain in performance of the strong model over the weak model in terms of the misfit between the strong and weak model. We generalize such a characterization to learning tasks whose loss functions correspond to arbitrary Bregman divergences when the strong class is convex. This extends the misfit-based characterization of performance gain in weak-to-strong generalization to classification tasks, as the cross-entropy loss can be expressed in terms of a Bregman divergence. In most practical scenarios, however, the strong model class may not be convex. We therefore weaken this assumption and study weak-to-strong generalization for convex combinations of $k$ strong models in the strong class, in the concrete setting of classification. This allows us to obtain a similar misfit-based characterization of performance gain, up to an additional error term that vanishes as $k$ gets large. Our theoretical findings are supported by thorough experiments on synthetic as well as real-world datasets.
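The abstract's key observation, that the cross-entropy loss can be written in terms of a Bregman divergence, can be checked numerically. The sketch below (illustrative, not taken from the paper's code) verifies that the Bregman divergence generated by negative entropy equals the KL divergence, and that cross-entropy decomposes into this KL divergence plus the entropy of the target distribution:

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
    return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

# Generator: negative entropy on the probability simplex.
phi = lambda p: np.sum(p * np.log(p))
grad_phi = lambda p: np.log(p) + 1.0

p = np.array([0.2, 0.5, 0.3])   # target (label) distribution
q = np.array([0.4, 0.4, 0.2])   # model's predicted distribution

kl = np.sum(p * np.log(p / q))           # KL(p || q)
cross_entropy = -np.sum(p * np.log(q))   # H(p, q)
entropy_p = -np.sum(p * np.log(p))       # H(p)

# Negative entropy's Bregman divergence is exactly KL divergence,
# so minimizing cross-entropy in q minimizes a Bregman divergence.
assert np.isclose(bregman(phi, grad_phi, p, q), kl)
assert np.isclose(cross_entropy, kl + entropy_p)
```

Since the entropy term H(p) does not depend on the model q, minimizing cross-entropy over q is equivalent to minimizing the Bregman divergence, which is why the squared-loss analysis carries over to classification.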
Lay Summary: (1) Previous work identified an interesting phenomenon where more complex student AI models could outperform their teachers if the teachers were smaller and less complex. This phenomenon is called weak-to-strong generalization, and it is unexpected, since it is unclear how the more complex model decides to correctly deviate from its teacher. (2) We extended prior work that analyzed this phenomenon with geometric tools to much more general settings. This brings the geometric ideas in line with practical use cases. (3) Understanding the mechanisms that govern weak-to-strong generalization allows us to safely build superhuman AI models that still align with our core values. In that situation, humans are the less complex teachers and the superhuman AI models are the more complex learners.
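The geometric idea from the prior squared-loss work can be sketched in a few lines. This is a toy illustration (all names here are ours, not from the paper's code), assuming the strong class is a convex set containing the ground truth and the strong model is the Euclidean projection of the weak model's labels onto that set; the Pythagorean inequality for projections then says the strong model's gain over the weak model is at least the strong-weak misfit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "models" are points in R^n, performance is squared loss
# to the ground truth. Strong class C = nonnegative orthant (convex),
# so projecting onto C amounts to clipping negative entries to zero.
f_star = np.abs(rng.normal(size=5))    # ground truth, lies in C
f_weak = rng.normal(size=5)            # weak model, may leave C
f_strong = np.clip(f_weak, 0.0, None)  # projection of weak onto C

loss_weak = np.sum((f_weak - f_star) ** 2)
loss_strong = np.sum((f_strong - f_star) ** 2)
misfit = np.sum((f_strong - f_weak) ** 2)

# Pythagorean inequality for projection onto a convex set:
# the strong model's gain is at least the strong-weak misfit.
assert loss_weak - loss_strong >= misfit - 1e-12
```

The paper's contribution, roughly, is extending this squared-loss geometry to general Bregman divergences and to finite convex combinations of strong models.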
Link To Code: https://github.com/abhmul/general-misfit-gain
Primary Area: General Machine Learning->Unsupervised and Semi-supervised Learning
Keywords: Weak-to-Strong Generalization, Bregman Divergence, Information Geometry, Alignment, Large Language Models
Submission Number: 9489