Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control
Abstract: Reinforcement learning (RL) has proven to be effective and versatile in inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications. Given the low sample efficiency of RL algorithms, it would take extensive time to collect enough data and train the RL policy to convergence. Second, online experience may not reflect the true demand due to the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address these challenges, we propose a training framework that combines reinforcement learning with feedback graph (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first leverage the MDP structure inherent in lost-sales IC problems and design a feedback graph (FG) tailored to these problems to generate abundant side experiences that aid RL updates. We then conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Guided by these insights, we design an intrinsic reward that directs the RL agent to explore the regions of the state-action space with more side experiences, further exploiting the FG's capability. Experimental results on single-item, multi-item, and multi-echelon environments demonstrate that our method greatly improves the sample efficiency of applying RL in IC.
Our code is available at https://github.com/Ziffer-byakuya/RLIMFG4IC
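To make the idea of side experiences concrete, the sketch below illustrates how a feedback graph could generate counterfactual transitions in a lost-sales setting and how an intrinsic bonus could reward states that yield more of them. This is a minimal, illustrative sketch only: the function names, the linear reward form, and the bonus proportional to the number of side experiences are assumptions for exposition, not the paper's exact formulation (see the repository linked above for the actual implementation).

```python
MAX_ORDER = 20                          # illustrative bound on the order quantity
PRICE, COST, HOLDING = 5.0, 2.0, 0.1    # illustrative per-unit economics
BETA = 0.05                             # assumed weight of the intrinsic bonus

def side_experiences(inventory, demand, censored):
    """Counterfactual (side) transitions implied by one observed step.

    In lost-sales IC, if the realized demand was fully served (not censored),
    the outcome of every alternative order quantity can be computed exactly,
    yielding one side experience per action -- the edges of the feedback graph.
    """
    if censored:                        # stock-out truncated demand: no side info
        return []
    experiences = []
    for order in range(MAX_ORDER + 1):
        stock = inventory + order
        sales = min(stock, demand)
        next_inv = stock - sales
        reward = PRICE * sales - COST * order - HOLDING * next_inv
        experiences.append((inventory, order, reward, next_inv))
    return experiences

def intrinsic_bonus(experiences):
    """Simple intrinsic reward: favor states that yield more side experiences."""
    return BETA * len(experiences)

# Usage: augment the replay buffer with side experiences and shape the reward.
obs_inventory, obs_demand, was_censored = 8, 5, False
side = side_experiences(obs_inventory, obs_demand, was_censored)
bonus = intrinsic_bonus(side)
```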
Submission Number: 564