PIPer: On-Device Environment Setup via Online Reinforcement Learning

ICLR 2026 Conference Submission 21217 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM, reinforcement learning, environment setup, machine learning for software engineering
TL;DR: We apply online reinforcement learning with verifiable rewards to train Qwen3-8B for the environment setup task, surpassing GPT-4o-mini and performing on par with 32B and GPT-4o models on EnvBench.
Abstract: Environment setup, the process of configuring a system to work with a specific software project, remains a persistent challenge in Software Engineering (SE). Automated environment setup methods could assist developers by providing fully configured environments for arbitrary repositories without manual effort. This also helps SE researchers to scale execution-based benchmarks. However, recent studies reveal that even state-of-the-art Large Language Models (LLMs) achieve limited success in automating this task. To address this limitation, we tune a specialized model for environment setup. We combine supervised fine-tuning for generating correct Bash scripts with Reinforcement Learning with Verifiable Rewards (RLVR) to adapt the model to the environment setup task. On EnvBench-Python, our method enables Qwen3-8B (a model runnable on consumer hardware) to perform on par with larger models, Qwen3-32B and GPT-4o. The training code and model checkpoints are available online: https://github.com/PIPer-iclr/PIPer.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21217