Keywords: mixed-mode ventilation; adversarial inverse reinforcement learning; imitation learning; data-driven modeling
TL;DR: We propose an adversarial inverse reinforcement learning framework for mixed-mode ventilation that learns from rule-based control demonstrations to reduce unnecessary window operations while improving comfort and energy efficiency.
Abstract: Mixed-mode ventilation (MMV) control presents a complex decision-making problem due to highly variable outdoor conditions and the need to balance natural ventilation with mechanical cooling. We propose a novel Adversarial Inverse Reinforcement Learning (AIRL) framework for MMV that tackles this complexity by jointly learning a reward function and an adaptive policy from building operational data. Our approach incorporates a physics-constrained neural network model of the MMV environment and a hierarchical policy structure, enabling effective handling of discrete window operations alongside continuous HVAC control. The learned policy reliably captures the window operation patterns of the rule-based control demonstrations while reducing unnecessary window switching, and it lowers the temperature comfort range violation rate from 1.7% to 0.4% relative to the rule-based control. These results demonstrate that the AIRL framework can achieve energy-efficient MMV control with significantly fewer window adjustments, improving occupant comfort and system stability compared to conventional or heuristic strategies.
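To make the hierarchical policy structure described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how a single policy network might combine a discrete head for window open/close decisions with a continuous head for HVAC control. It assumes PyTorch, and all class, layer, and variable names (e.g., `HierarchicalMMVPolicy`, `n_window_actions`, `hvac_dim`) are hypothetical illustrations rather than anything specified in the paper.

```python
# Minimal sketch of a hierarchical policy for mixed-mode ventilation:
# a shared encoder feeds a discrete head (window operation) and a
# continuous head (HVAC control). Hypothetical names and dimensions.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal


class HierarchicalMMVPolicy(nn.Module):
    def __init__(self, obs_dim, n_window_actions=2, hvac_dim=1, hidden=64):
        super().__init__()
        # Shared state encoder over building/outdoor observations.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Discrete head: window open/close (or multiple window positions).
        self.window_head = nn.Linear(hidden, n_window_actions)
        # Continuous head: Gaussian over HVAC control (e.g., setpoint offset).
        self.hvac_mean = nn.Linear(hidden, hvac_dim)
        self.hvac_log_std = nn.Parameter(torch.zeros(hvac_dim))

    def forward(self, obs):
        h = self.encoder(obs)
        window_dist = Categorical(logits=self.window_head(h))
        hvac_dist = Normal(self.hvac_mean(h), self.hvac_log_std.exp())
        return window_dist, hvac_dist

    def act(self, obs):
        # Sample both action components and return their joint log-probability,
        # which an AIRL-style trainer would use for the policy update.
        window_dist, hvac_dist = self.forward(obs)
        window_action = window_dist.sample()
        hvac_action = hvac_dist.sample()
        log_prob = (window_dist.log_prob(window_action)
                    + hvac_dist.log_prob(hvac_action).sum(-1))
        return window_action, hvac_action, log_prob
```

In an AIRL setup, a policy of this form would be trained against a learned discriminator/reward on rollouts from the environment model, while the discriminator is fit to distinguish policy transitions from the rule-based demonstrations; the sketch above only covers the action-sampling side.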
Submission Number: 23