Thermal and Energy Management with Fan Control Through Offline Meta-Reinforcement Learning

Published: 22 Oct 2024, Last Modified: 23 Oct 2024. NeurIPS 2024 Workshop on Open-World Agents (Poster). License: CC BY 4.0
Keywords: Reinforcement Learning; Control System; Open World Agents
TL;DR: A meta-learning method for fan control in open-world environments.
Abstract: Reinforcement learning has garnered significant attention across various fields, including computer vision, natural language processing, and robotics. In this work, we explore the potential of applying reinforcement learning to open-world agents through an empirical study of three distinct offline meta-reinforcement learning approaches for fan control, with a focus on thermal and energy management. Our models enable adaptive fan speed control, which not only protects devices from overheating but also effectively reduces power consumption. To better evaluate the performance in open-world scenarios, we go beyond the industry-standard steady-state test by conducting a CPU-stress test that simulates a more dynamic and unpredictable deployment environment. Compared to commercially available techniques, our solution achieves up to a 21% reduction in power consumption on a real 2U server under the worst thermal conditions. This approach demonstrates the broader applicability of meta-reinforcement learning in the thermal and energy management of server systems, particularly in open-world settings.
Submission Number: 5
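To make the problem setting concrete, the sketch below shows one common way a fan-control reinforcement-learning task can be formulated: the state is the observed CPU temperature and load, the action is a fan duty cycle, and the reward trades off fan power against thermal-limit violations. This is an illustrative toy simulator only; the paper's actual environment, reward function, and all constants (`T_LIMIT`, the thermal dynamics, the cubic fan-power model) are not published here and are assumptions for demonstration.

```python
"""Illustrative sketch only: a toy fan-control environment with a
power-vs-overheating reward. All dynamics and constants are hypothetical,
not the paper's actual setup."""
import random


class ToyFanControlEnv:
    """Minimal thermal/energy simulator for a single fan zone (assumed model)."""

    T_AMBIENT = 25.0   # ambient temperature, deg C (assumed)
    T_LIMIT = 85.0     # CPU thermal limit, deg C (assumed)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.cpu_temp = 55.0   # deg C
        self.cpu_load = 0.5    # utilization in [0, 1]
        return self._obs()

    def _obs(self):
        return (self.cpu_temp, self.cpu_load)

    def step(self, fan_duty):
        """fan_duty in [0, 1]; returns (obs, reward, done)."""
        fan_duty = min(max(fan_duty, 0.0), 1.0)

        # CPU load drifts randomly, mimicking a dynamic, stress-test-like workload.
        self.cpu_load = min(max(self.cpu_load + self.rng.uniform(-0.1, 0.1), 0.0), 1.0)

        # Toy thermal dynamics: heating grows with load, cooling with fan speed.
        heating = 30.0 * self.cpu_load
        cooling = (0.05 + 0.4 * fan_duty) * (self.cpu_temp - self.T_AMBIENT)
        self.cpu_temp += 0.1 * (heating - cooling)

        # Fan power rises roughly cubically with speed (typical fan-law assumption).
        fan_power = 20.0 * fan_duty ** 3

        # Reward: minimize fan power while penalizing thermal-limit violations.
        overheat = max(0.0, self.cpu_temp - self.T_LIMIT)
        reward = -fan_power - 50.0 * overheat

        return self._obs(), reward, False


if __name__ == "__main__":
    env = ToyFanControlEnv()
    obs = env.reset()
    total = 0.0
    for _ in range(100):
        duty = random.random()        # placeholder policy; an offline
        obs, r, _ = env.step(duty)    # meta-RL agent would choose duty here
        total += r
    print(f"return under random fan policy: {total:.1f}")
```

In an offline meta-RL setting, logged interaction data from several such environments (e.g., different server configurations or ambient conditions) would be used to train a policy that adapts to a new thermal environment without online exploration; the random policy above is only a stand-in to exercise the simulator.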