Keywords: test-time scaling, RL, reasoning, diversity, decoding
TL;DR: Weight ensembling improves pass@k of reasoning models.
Abstract: We investigate a pitfall during the training of reasoning models where the diversity of generations begins to collapse, leading to suboptimal test-time scaling. Notably, Pass@1 reliably improves during supervised finetuning (SFT), but Pass@k rapidly deteriorates. Surprisingly, a simple intervention of interpolating the weights of the latest SFT checkpoint with an early checkpoint, otherwise known as WiSE-FT, almost completely recovers Pass@k while also improving Pass@1. The WiSE-FT variant attains better test-time scaling (Best@k, majority vote) and achieves superior results with less data when tuned further by reinforcement learning. Finally, we note that WiSE-FT provides complementary gains across performance metrics that are not achievable by diversity-inducing decoding strategies alone, like temperature scaling. We formalize a \emph{bias-variance tradeoff} of Pass@k with respect to the expectation and variance of Pass@1 over the test distribution. We find that WiSE-FT can reduce bias and variance simultaneously, while temperature scaling and possibly other decoding strategies face an inherent tradeoff between decreasing variance and increasing bias.
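The two central quantities in the abstract, checkpoint weight interpolation (WiSE-FT) and the Pass@k metric, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names and the dict-of-arrays weight representation are assumptions; the Pass@k formula is the standard unbiased estimator computed from n sampled generations of which c are correct.

```python
import math

def wise_ft(early_weights, late_weights, alpha=0.5):
    """Linearly interpolate two checkpoints' parameters (WiSE-FT-style).

    Weights are represented here as {param_name: value} dicts; alpha=1.0
    recovers the late (fully finetuned) checkpoint, alpha=0.0 the early one.
    """
    return {name: (1 - alpha) * early_weights[name] + alpha * late_weights[name]
            for name in late_weights}

def pass_at_k(n, c, k):
    """Unbiased Pass@k estimator: probability that at least one of k
    completions drawn without replacement from n samples (c correct)
    solves the problem: 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k draws
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```

For example, with n = 10 generations of which c = 1 is correct, Pass@1 is 0.1 while Pass@10 is 1.0, which is why a model can look worse on Pass@1 yet far better on Pass@k when its generations are diverse.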
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1162