An Expert in Residence: LLM Agents for Always-On Operating System Tuning

NeurIPS 2025 Workshop MLForSys, Submission 6

Published: 30 Oct 2025, Last Modified: 15 Nov 2025, MLForSys 2025, CC BY 4.0
Keywords: LLM Agents, Operating System Tuning, Online Optimization
TL;DR: We introduce an online, LLM-driven agent for continuous operating system tuning which, in a case study on the Linux CPU scheduler (CFS), converges faster and more stably than Bayesian optimization, reinforcement learning, and a human expert.
Abstract: Classical machine-learning auto-tuners for OS control struggle with semantic gaps, brittle rewards, and unsafe exploration. We introduce an online, LLM-driven agent that emulates expert reasoning for continuous OS optimization. When tuning the Linux Completely Fair Scheduler's hyperparameters, the agent outperforms Bayesian optimization by 5% in single-parameter tuning and 7.1% in two-parameter co-tuning, and a human expert by 2.98% overall, while converging faster and adapting more quickly to workload changes. When application counters are unavailable, system-level proxies (e.g., Instructions Per Cycle (IPC)) preserved tail latency in our setup. Putting this together, we propose adopting the Model Context Protocol (MCP) for tool/resource discovery, invocation, and logging; on top of that, we propose adding transactional apply-commit-revert, host-mediated approval gates, and policy controls in the OS-tuning server and host to ensure safe, auditable operation. Our results and reference design suggest a practical path toward safe, self-adapting OS control.
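The transactional apply-commit-revert mechanism proposed in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `TransactionalTuner` class, its method names, and the simulated sysctl store are all hypothetical, and a real MCP OS-tuning server would mediate writes through approval gates and policy controls.

```python
# Hypothetical sketch of the transactional apply-commit-revert pattern for
# an OS-tuning server. The class and parameter names are illustrative; a
# real deployment would read/write kernel tunables (e.g. under /proc/sys)
# behind host-mediated approval gates.

class TransactionalTuner:
    def __init__(self, read_fn, write_fn):
        self.read = read_fn      # e.g. reads a kernel tunable on a real host
        self.write = write_fn    # e.g. writes a kernel tunable
        self._saved = {}         # snapshot of pre-apply values

    def apply(self, changes):
        """Stage new values, snapshotting originals so revert is possible."""
        for key, value in changes.items():
            self._saved.setdefault(key, self.read(key))
            self.write(key, value)

    def commit(self):
        """Accept the staged values; discard the snapshot."""
        self._saved.clear()

    def revert(self):
        """Restore every snapshotted value (e.g. after an SLO violation)."""
        for key, old in self._saved.items():
            self.write(key, old)
        self._saved.clear()


# Usage with a simulated sysctl store (a plain dict standing in for /proc/sys):
store = {"kernel.sched_latency_ns": 24000000}
tuner = TransactionalTuner(store.get, store.__setitem__)
tuner.apply({"kernel.sched_latency_ns": 12000000})
tuner.revert()  # suppose tail latency regressed, so roll back
print(store["kernel.sched_latency_ns"])  # 24000000
```

The snapshot-before-write design is what makes unsafe exploration recoverable: any change the agent applies can be rolled back atomically before it is committed.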
Submission Number: 6