TL;DR: Inference-time objectives can be directly optimized via RL
Abstract: In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training, and show that doing so can improve overall model efficacy. We consider generic inference-time objectives with $k$ samples, focusing on pass@$k$ and majority voting as two main applications. Training language models on reasoning datasets, we showcase the performance trade-offs enabled by such objectives. On code generation tasks, we show that the approach significantly improves pass@$k$ compared to the baseline method.
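The abstract names pass@$k$ and majority voting as $k$-sample inference-time objectives. As a minimal sketch of the per-prompt rewards such objectives induce (assuming a binary correctness verifier and hashable final answers; the function names here are hypothetical, and the paper's actual training objective and gradient estimator are not specified on this page), one could write:

```python
from collections import Counter
from typing import Hashable, Sequence


def pass_at_k_reward(is_correct: Sequence[bool]) -> float:
    """Pass@k reward for one prompt: 1.0 if any of the k samples is correct.

    `is_correct` holds per-sample verifier outcomes (e.g., unit-test results
    for code generation). Maximizing this credits the *set* of k samples
    jointly rather than each sample independently.
    """
    return float(any(is_correct))


def majority_vote_reward(answers: Sequence[Hashable], gold: Hashable) -> float:
    """Majority-voting reward: 1.0 if the most frequent of the k final
    answers matches the reference answer (ties broken arbitrarily here)."""
    majority_answer, _ = Counter(answers).most_common(1)[0]
    return float(majority_answer == gold)


# Example with k = 4 samples for a single prompt:
print(pass_at_k_reward([False, False, True, False]))         # 1.0
print(majority_vote_reward(["42", "17", "42", "42"], "42"))  # 1.0
```

Note the contrast with standard RL fine-tuning, which would reward each sample independently (equivalent to maximizing pass@1 in expectation); the set-level rewards above are what make the objective "inference-time aware."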
Lay Summary: Learning to do better inference-time computation with reinforcement learning.
Primary Area: Reinforcement Learning->Deep RL
Keywords: RL, inference-time compute, pass@k, code generation
Submission Number: 10164